Merge pull request #1029 from e-sensing/dev
Fix documentation problems
gilbertocamara authored Oct 29, 2023
2 parents ea862ee + b8a632f commit d0412e6
Showing 49 changed files with 356 additions and 412 deletions.
2 changes: 1 addition & 1 deletion R/sits_accuracy.R
@@ -4,7 +4,7 @@
#' @author Alber Sanchez, \email{alber.ipia@@inpe.br}
#' @description This function calculates the accuracy of the classification
#' result. For a set of time series, it creates a confusion matrix and then
#' calculates the resulting statistics using the R package "caret". The time
#' calculates the resulting statistics using package \code{caret}. The time
#' series needs to be classified using \code{\link[sits]{sits_classify}}.
#'
#' Classified images are generated using \code{\link[sits]{sits_classify}}
39 changes: 19 additions & 20 deletions R/sits_active_learning.R
@@ -13,20 +13,20 @@
#' These points don't have labels and need to be manually labelled by experts
#' and then used to increase the classification's training set.
#'
#' This function is best used in the following context
#' \itemize{
#' \item{1. }{Select an initial set of samples.}
#' \item{2. }{Train a machine learning model.}
#' \item{3. }{Build a data cube and classify it using the model.}
#' \item{4. }{Run a Bayesian smoothing in the resulting probability cube.}
#' \item{5. }{Create an uncertainty cube.}
#' \item{6. }{Perform uncertainty sampling.}
#' }
#' This function is best used in the following context:
#' 1. Select an initial set of samples.
#' 2. Train a machine learning model.
#' 3. Build a data cube and classify it using the model.
#' 4. Run a Bayesian smoothing on the resulting probability cube.
#' 5. Create an uncertainty cube.
#' 6. Perform uncertainty sampling.
#'
#' The Bayesian smoothing procedure will reduce the classification outliers
#' and thus increase the likelihood that the resulting pixels with high
#' uncertainty have meaningful information.
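#'
#' Taken together, the steps above can be sketched as follows. This is a
#' hypothetical sketch: the sample tibble, the data cube, and all parameter
#' values are placeholders, and each function's documentation should be
#' consulted for the full argument list.

```r
library(sits)
# Steps 1-2: train a model on an initial sample set (placeholder tibble)
rfor_model <- sits_train(samples, sits_rfor())
# Step 3: classify a data cube with the trained model
probs_cube <- sits_classify(cube, rfor_model, output_dir = tempdir())
# Step 4: apply Bayesian smoothing to the probability cube
bayes_cube <- sits_smooth(probs_cube, output_dir = tempdir())
# Step 5: derive an uncertainty cube from the smoothed probabilities
uncert_cube <- sits_uncertainty(bayes_cube, output_dir = tempdir())
# Step 6: suggest new samples where uncertainty is high
new_samples <- sits_uncertainty_sampling(
    uncert_cube,
    n = 100, min_uncert = 0.4, sampling_window = 10
)
```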
#'
#' @param uncert_cube An uncertainty cube. See \code{sits_uncertainty}.
#' @param uncert_cube An uncertainty cube.
#' See \code{\link[sits]{sits_uncertainty}}.
#' @param n Number of suggested points.
#' @param min_uncert Minimum uncertainty value to select a sample.
#' @param sampling_window Window size for collecting points (in pixels).
@@ -158,21 +158,20 @@ sits_uncertainty_sampling <- function(uncert_cube,
#' this label compared to all others. The algorithm also considers a
#' minimum distance between new labels, to minimize spatial autocorrelation
#' effects.
#' This function is best used in the following context:
#' 1. Select an initial set of samples.
#' 2. Train a machine learning model.
#' 3. Build a data cube and classify it using the model.
#' 4. Run a Bayesian smoothing on the resulting probability cube.
#' 5. Perform confidence sampling.
#'
#' This function is best used in the following context
#' \itemize{
#' \item{1. }{Select an initial set of samples.}
#' \item{2. }{Train a machine learning model.}
#' \item{3. }{Build a data cube and classify it using the model.}
#' \item{4. }{Run a Bayesian smoothing in the resulting probability cube.}
#' \item{5. }{Create an uncertainty cube.}
#' \item{6. }{Perform confidence sampling.}
#' }
#' The Bayesian smoothing procedure will reduce the classification outliers
#' and thus increase the likelihood that the resulting pixels will provide
#' good quality samples for each class.
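#'
#' A minimal sketch of this workflow, assuming a probability cube has
#' already been produced and smoothed (the cube and the parameter values
#' below are placeholders):

```r
library(sits)
# bayes_cube is assumed to be a probability cube after sits_smooth()
new_samples <- sits_confidence_sampling(
    bayes_cube,
    n = 20, min_margin = 0.5, sampling_window = 10
)
```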
#'
#' @param probs_cube A probability cube. See \code{sits_classify}.
#' @param probs_cube A smoothed probability cube.
#' See \code{\link[sits]{sits_classify}} and
#' \code{\link[sits]{sits_smooth}}.
#' @param n Number of suggested points per class.
#' @param min_margin Minimum margin of confidence to select a sample.
#' @param sampling_window Window size for collecting points (in pixels).
64 changes: 30 additions & 34 deletions R/sits_classify.R
@@ -1,7 +1,5 @@
#' @title Classify time series or data cubes
#'
#' @name sits_classify
#'
#' @author Rolf Simoes, \email{rolf.simoes@@inpe.br}
#' @author Gilberto Camara, \email{gilberto.camara@@inpe.br}
#'
@@ -10,16 +8,13 @@
#' a trained prediction model created by \code{\link[sits]{sits_train}}.
#'
#' SITS supports the following models:
#' \itemize{
#' \item{support vector machines: } {see \code{\link[sits]{sits_svm}}}
#' \item{random forests: } {see \code{\link[sits]{sits_rfor}}}
#' \item{extreme gradient boosting: } {see \code{\link[sits]{sits_xgboost}}}
#' \item{multi-layer perceptrons: } {see \code{\link[sits]{sits_mlp}}}
#' \item{1D CNN: } {see \code{\link[sits]{sits_tempcnn}}}
#' \item{deep residual networks:}{see \code{\link[sits]{sits_resnet}}}
#' \item{self-attention encoders:}{see \code{\link[sits]{sits_lighttae}}}
#' }
#'
#' (a) support vector machines: \code{\link[sits]{sits_svm}};
#' (b) random forests: \code{\link[sits]{sits_rfor}};
#' (c) extreme gradient boosting: \code{\link[sits]{sits_xgboost}};
#' (d) multi-layer perceptrons: \code{\link[sits]{sits_mlp}};
#' (e) 1D CNN: \code{\link[sits]{sits_tempcnn}};
#' (f) deep residual networks: \code{\link[sits]{sits_resnet}};
#' (g) self-attention encoders: \code{\link[sits]{sits_lighttae}}.
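#'
#' Any of these methods is passed to \code{sits_train} as a function.
#' A minimal sketch (the sample tibble is a placeholder):

```r
library(sits)
# samples is a placeholder sits tibble of labelled time series
rfor_model <- sits_train(samples, ml_method = sits_rfor())
tcnn_model <- sits_train(samples, ml_method = sits_tempcnn())
```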
#'
#' @param data Data cube (tibble of class "raster_cube")
#' @param ml_model R model trained by \code{\link[sits]{sits_train}}
@@ -40,7 +35,7 @@
#' (integer, min = 1, max = 16384).
#' @param multicores Number of cores to be used for classification
#' (integer, min = 1, max = 2048).
#' @param gpu_memory Memory available in GPU (default = NULL)
#' @param gpu_memory Memory available in GPU in GB (default = 16)
#' @param n_sam_pol Number of time series per segment to be classified
#' (integer, min = 10, max = 50).
#' @param output_dir Valid directory for output file.
@@ -56,30 +51,31 @@
#' (tibble of class "probs_cube").
#'
#' @note
#' The "roi" parameter defines a region of interest. It can be
#' The \code{roi} parameter defines a region of interest. It can be
#' an sf_object, a shapefile, or a bounding box vector with
#' named XY values ("xmin", "xmax", "ymin", "ymax") or
#' named lat/long values ("lon_min", "lat_min", "lon_max", "lat_max")
#' named XY values (\code{xmin}, \code{xmax}, \code{ymin}, \code{ymax}) or
#' named lat/long values (\code{lon_min}, \code{lon_max},
#' \code{lat_min}, \code{lat_max})
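#'
#' For example, a bounding-box roi can be passed as a named vector
#' (a sketch; the coordinate values, cube, and model are illustrative
#' placeholders):

```r
roi <- c(lon_min = -55.2, lat_min = -11.6,
         lon_max = -54.8, lat_max = -11.1)
probs_cube <- sits_classify(cube, ml_model, roi = roi,
                            output_dir = tempdir())
```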
#'
#' The "filter_fn" parameter specifies a smoothing filter to be applied to
#' time series for reducing noise. Currently, options include
#' Savitzky-Golay (see \code{\link[sits]{sits_sgolay}}) and Whittaker
#' (see \code{\link[sits]{sits_whittaker}}).
#' Parameter \code{filter_fn} specifies a smoothing filter
#' to be applied to each time series for reducing noise. Currently, options
#' are Savitzky-Golay (see \code{\link[sits]{sits_sgolay}}) and Whittaker
#' (see \code{\link[sits]{sits_whittaker}}) filters.
#'
#' The "memsize" and "multicores" parameters are used for multiprocessing.
#' The "multicores" parameter defines the number of cores used for
#' processing. The "memsize" parameter controls the amount of memory
#' available for classification. We recommend using a 4:1 relation between
#' "memsize" and "multicores".
#' Parameter \code{memsize} controls the amount of memory available
#' for classification, while \code{multicores} defines the number of cores
#' used for processing. We recommend using as much memory as possible.
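#'
#' The filter and resource parameters described above can be combined in a
#' single call (a sketch; the cube, the model, and the values below are
#' placeholders):

```r
probs_cube <- sits_classify(
    cube, ml_model,
    filter_fn = sits_sgolay(),  # smooth each time series before prediction
    memsize = 8, multicores = 2,
    output_dir = tempdir()
)
```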
#'
#' When using a GPU for deep learning, \code{gpu_memory} indicates the
#' memory available in the graphics card.
#'
#' For classifying vector data cubes created by
#' \code{\link[sits]{sits_segment}}, two parameters can be used:
#' \code{n_sam_pol}, which is the number of time series to be classified
#' per segment.
#' \code{\link[sits]{sits_segment}},
#' \code{n_sam_pol} controls the number of time series to be
#' classified per segment.
#'
#' @note
#' Please refer to the sits documentation available in
#' <https://e-sensing.github.io/sitsbook/> for detailed examples.
#' Please refer to the sits documentation available in
#' <https://e-sensing.github.io/sitsbook/> for detailed examples.
#' @examples
#' if (sits_run_examples()) {
#' # Example of classification of a time series
@@ -165,7 +161,7 @@ sits_classify.sits <- function(data,
...,
filter_fn = NULL,
multicores = 2L,
gpu_memory = NULL,
gpu_memory = 16,
progress = TRUE) {
# Pre-conditions
data <- .check_samples_ts(data)
@@ -197,7 +193,7 @@ sits_classify.raster_cube <- function(data,
end_date = NULL,
memsize = 8L,
multicores = 2L,
gpu_memory = NULL,
gpu_memory = 16,
output_dir,
version = "v1",
verbose = FALSE,
@@ -350,7 +346,7 @@ sits_classify.segs_cube <- function(data,
end_date = NULL,
memsize = 8L,
multicores = 2L,
gpu_memory = NULL,
gpu_memory = 16,
output_dir,
version = "v1",
n_sam_pol = 40,
19 changes: 11 additions & 8 deletions R/sits_cluster.R
@@ -6,18 +6,20 @@
#' sits. They provide support for creating a dendrogram and using it for
#' cleaning samples.
#'
#' \code{sits_cluster_dendro()} takes a tibble containing time series and
#' \code{\link[sits]{sits_cluster_dendro}} takes a tibble with time series and
#' produces a sits tibble with an added "cluster" column. The function first
#' calculates a dendrogram and obtains a validity index for best clustering
#' using the adjusted Rand Index. After cutting the dendrogram using the chosen
#' validity index, it assigns a cluster to each sample.
#'
#' \code{sits_cluster_frequency()} computes the contingency table between labels
#' \code{\link[sits]{sits_cluster_frequency}} computes the contingency
#' table between labels
#' and clusters and produces a matrix.
#' It needs as input a tibble produced by \code{sits_cluster_dendro()}.
#' Its input is a tibble produced by \code{\link[sits]{sits_cluster_dendro}}.
#'
#' \code{sits_cluster_clean()} takes a tibble with time series
#' that has an additional `cluster` produced by \code{sits_cluster_dendro()}
#' \code{\link[sits]{sits_cluster_clean}} takes a tibble with time series
#' that has an additional `cluster` column produced by
#' \code{\link[sits]{sits_cluster_dendro}}
#' and removes labels that are minority in each cluster.
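#'
#' The three functions chain naturally. A sketch using
#' \code{cerrado_2classes}, a sample dataset shipped with sits (treat the
#' exact calls as illustrative):

```r
library(sits)
# cluster the samples via a dendrogram cut by the adjusted Rand index
clusters <- sits_cluster_dendro(cerrado_2classes)
# inspect the label-by-cluster contingency table
sits_cluster_frequency(clusters)
# remove labels that are a minority within each cluster
clean_samples <- sits_cluster_clean(clusters)
```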
#'
#' @references "dtwclust" package (https://CRAN.R-project.org/package=dtwclust)
@@ -155,7 +157,7 @@ sits_cluster_dendro.default <- function(samples, ...) {
#' @author Rolf Simoes, \email{rolf.simoes@@inpe.br}
#' @param samples Tibble with input set of time series with additional
#' cluster information produced
#' by \code{sits::sits_cluster_dendro}.
#' by \code{\link[sits]{sits_cluster_dendro}}.
#' @return A matrix containing frequencies
#' of labels in clusters.
#' @examples
Expand Down Expand Up @@ -185,12 +187,13 @@ sits_cluster_frequency <- function(samples) {
#' @name sits_cluster_clean
#' @author Rolf Simoes, \email{rolf.simoes@@inpe.br}
#' @description Takes a tibble with time series
#' that has an additional `cluster` produced by \code{sits_cluster_dendro()}
#' that has an additional `cluster` column produced by
#' \code{\link[sits]{sits_cluster_dendro}}
#' and removes labels that are minority in each cluster.
#'
#' @param samples Tibble with set of time series with additional
#' cluster information produced
#' by \code{sits::sits_cluster_dendro()} (class "sits")
#' by \code{\link[sits]{sits_cluster_dendro}}
#' @return Tibble with time series (class "sits")
#' @examples
#' if (sits_run_examples()) {
4 changes: 0 additions & 4 deletions R/sits_combine_predictions.R
@@ -29,10 +29,6 @@
#' The supported types of ensemble predictors are 'average' and
#' 'uncertainty'.
#'
#' @note
#' Please refer to the sits documentation available in
#' <https://e-sensing.github.io/sitsbook/> for detailed examples.
#'
#' @examples
#' if (sits_run_examples()) {
#' # create a data cube from local files
2 changes: 1 addition & 1 deletion R/sits_config.R
@@ -21,7 +21,7 @@
#' \code{SITS_CONFIG_USER_FILE} or as parameter to this function.
#'
#' To see the key entries and contents of the current configuration values,
#' use \code{sits_config_show()}.
#' use \code{\link[sits]{sits_config_show}}.
#'
#' @return Called for side effects
#'
