Commit

finalize CRAN update
MaximilianPi committed Mar 17, 2024
1 parent e58516e commit 910d660
Showing 11 changed files with 174 additions and 52 deletions.
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -22,7 +22,7 @@ Authors@R:
role = "ctb",
email = "armin.schenk99@gmail.com")
)
Description: The 'cito' package provides a user-friendly interface for training and interpreting deep neural networks (DNN). 'cito' simplifies the fitting of DNNs by supporting the familiar formula syntax, hyper-parameter tuning under cross-validation, and helps to detect and handle convergence problems. DNNs can be trained on CPU, GPU and MacOS GPUs. In addition, 'cito' has many downstream functionalities such as various explainable AI metrics (e.g. variable importance, partial dependence plots, accumulated local effect plots, and effect estimates) to interpret trained DNNs. Finally, 'cito' optionally provides confidence intervals (and p-values) for all xAI metrics and predictions.
Description: The 'cito' package provides a user-friendly interface for training and interpreting deep neural networks (DNNs). 'cito' simplifies the fitting of DNNs by supporting the familiar formula syntax and hyperparameter tuning under cross-validation, and it helps to detect and handle convergence problems. DNNs can be trained on CPUs, GPUs, and macOS GPUs. In addition, 'cito' offers many downstream functionalities, such as various explainable AI (xAI) metrics (e.g. variable importance, partial dependence plots, accumulated local effect plots, and effect estimates), to interpret trained DNNs. 'cito' optionally provides confidence intervals (and p-values) for all xAI metrics and predictions. At the same time, 'cito' is computationally efficient because it is based on the deep learning framework 'torch'. The 'torch' package is native to R, so no Python installation or other API is required.
Encoding: UTF-8
LazyData: true
Roxygen: list(markdown = TRUE)
2 changes: 1 addition & 1 deletion R/cito.R
@@ -1,6 +1,6 @@
#' 'cito': Building and training neural networks
#'
#' 'cito' simplifies the building and training of (deep) neural networks by relying on standard R syntax and familiar methods from statistical packages. Model creation and training can be done with a single line of code. Furthermore, all generic R methods such as print or plot can be used on the fitted model. At the same time, 'cito' is computationally efficient because it is based on the deep learning framework 'torch' (with optional GPU support). The 'torch' package is native to R, so no Python installation or other API is required for this package.
#' The 'cito' package provides a user-friendly interface for training and interpreting deep neural networks (DNNs). 'cito' simplifies the fitting of DNNs by supporting the familiar formula syntax and hyperparameter tuning under cross-validation, and it helps to detect and handle convergence problems. DNNs can be trained on CPUs, GPUs, and macOS GPUs. In addition, 'cito' offers many downstream functionalities, such as various explainable AI (xAI) metrics (e.g. variable importance, partial dependence plots, accumulated local effect plots, and effect estimates), to interpret trained DNNs. 'cito' optionally provides confidence intervals (and p-values) for all xAI metrics and predictions. At the same time, 'cito' is computationally efficient because it is based on the deep learning framework 'torch'. The 'torch' package is native to R, so no Python installation or other API is required.
#'
#' Cito is built around its main function \code{\link{dnn}}, which creates and trains a deep neural network. Various tools for analyzing the trained neural network are available.
#'
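The package documentation above centers on cito's formula interface via `dnn()`. A minimal, hedged usage sketch (it assumes the 'cito' and 'torch' packages are installed; the argument names `hidden` and `epochs` follow cito's documented `dnn()` interface, but defaults may differ across versions, so consult `?dnn`):

```r
# Illustrative sketch only; requires the 'cito' package (and 'torch') to be
# installed. Argument defaults may vary by version.
library(cito)

# Fit a small DNN using the familiar R formula syntax
fit <- dnn(Sepal.Length ~ ., data = iris, hidden = c(10L, 10L), epochs = 50)

# Generic R methods work on the fitted model
summary(fit)        # includes xAI metrics such as variable importance
predict(fit, iris)  # predictions on new (here: training) data
```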
119 changes: 73 additions & 46 deletions R/cnn.R
@@ -716,65 +716,81 @@ print.citoarchitecture <- function(x, input_shape, output_shape, ...) {
cat("-------------------------------------------------------------------------------\n")
}

print.linear <- function(layer, input_shape, ...) {
#' Print linear layer
#'
#' @param x an object of class linear
#' @param input_shape input shape
#' @param ... further arguments, not supported yet
print.linear <- function(x, input_shape, ...) {

cat("-------------------------------------------------------------------------------\n")
cat(paste0("Linear |Input: ", prod(input_shape), "\n"))
cat(paste0(" |Output: ", layer[["n_neurons"]], "\n"))
cat(paste0(" |Bias: ", layer[["bias"]], "\n"))
if(layer[["normalization"]]) {
cat(paste0(" |Output: ", x[["n_neurons"]], "\n"))
cat(paste0(" |Bias: ", x[["bias"]], "\n"))
if(x[["normalization"]]) {
cat(paste0(" |Batch normalization\n"))
}
cat(paste0(" |Activation: ", layer[["activation"]], "\n"))
if(layer[["dropout"]]>0) {
cat(paste0(" |Dropout: rate=", layer[["dropout"]], "\n"))
cat(paste0(" |Activation: ", x[["activation"]], "\n"))
if(x[["dropout"]]>0) {
cat(paste0(" |Dropout: rate=", x[["dropout"]], "\n"))
}

return(invisible(layer[["n_neurons"]]))
return(invisible(x[["n_neurons"]]))
}

print.conv <- function(layer, input_shape, ...) {
#' Print conv layer
#'
#' @param x an object of class conv
#' @param input_shape input shape
#' @param ... further arguments, not supported yet
print.conv <- function(x, input_shape, ...) {

output_shape <- get_output_shape(input_shape = input_shape,
n_kernels = layer[["n_kernels"]],
kernel_size = layer[["kernel_size"]],
stride = layer[["stride"]],
padding = layer[["padding"]],
dilation = layer[["dilation"]])
n_kernels = x[["n_kernels"]],
kernel_size = x[["kernel_size"]],
stride = x[["stride"]],
padding = x[["padding"]],
dilation = x[["dilation"]])

kernel_size <- paste(layer[["kernel_size"]], collapse = "x")
stride <- paste(layer[["stride"]], collapse = "x")
padding <- paste(layer[["padding"]], collapse = "x")
dilation <- paste(layer[["dilation"]], collapse = "x")
kernel_size <- paste(x[["kernel_size"]], collapse = "x")
stride <- paste(x[["stride"]], collapse = "x")
padding <- paste(x[["padding"]], collapse = "x")
dilation <- paste(x[["dilation"]], collapse = "x")

cat("-------------------------------------------------------------------------------\n")
cat(paste0("Convolution|Input: ", paste(input_shape, collapse = "x"), "\n"))
cat(paste0(" |Output: ", paste(output_shape, collapse = "x"), "\n"))
cat(paste0(" |Kernel: ", kernel_size, " (stride=", stride, ", padding=", padding, ", dilation=", dilation, ")\n"))
cat(paste0(" |Bias: ", layer[["bias"]], "\n"))
if(layer[["normalization"]]) {
cat(paste0(" |Bias: ", x[["bias"]], "\n"))
if(x[["normalization"]]) {
cat(paste0(" |Batch normalization\n"))
}
cat(paste0(" |Activation: ", layer[["activation"]], "\n"))
if(layer[["dropout"]]>0) {
cat(paste0(" |Dropout: rate=", layer[["dropout"]], "\n"))
cat(paste0(" |Activation: ", x[["activation"]], "\n"))
if(x[["dropout"]]>0) {
cat(paste0(" |Dropout: rate=", x[["dropout"]], "\n"))
}

return(invisible(output_shape))
}

print.avgPool <- function(layer, input_shape, ...) {

#' Print average pooling layer
#'
#' @param x an object of class avgPool
#' @param input_shape input shape
#' @param ... further arguments, not supported yet
print.avgPool <- function(x, input_shape, ...) {

output_shape <- get_output_shape(input_shape = input_shape,
n_kernels = input_shape[1],
kernel_size = layer[["kernel_size"]],
stride = layer[["stride"]],
padding = layer[["padding"]],
kernel_size = x[["kernel_size"]],
stride = x[["stride"]],
padding = x[["padding"]],
dilation = rep(1,length(input_shape)-1))

kernel_size <- paste(layer[["kernel_size"]], collapse = "x")
stride <- paste(layer[["stride"]], collapse = "x")
padding <- paste(layer[["padding"]], collapse = "x")
kernel_size <- paste(x[["kernel_size"]], collapse = "x")
stride <- paste(x[["stride"]], collapse = "x")
padding <- paste(x[["padding"]], collapse = "x")

cat("-------------------------------------------------------------------------------\n")
cat(paste0("AvgPool |Input: ", paste(input_shape, collapse = "x"), "\n"))
@@ -784,19 +800,24 @@ print.avgPool <- function(layer, input_shape, ...) {
return(invisible(output_shape))
}

print.maxPool <- function(layer, input_shape, ...) {
#' Print max pooling layer
#'
#' @param x an object of class maxPool
#' @param input_shape input shape
#' @param ... further arguments, not supported yet
print.maxPool <- function(x, input_shape, ...) {

output_shape <- get_output_shape(input_shape = input_shape,
n_kernels = input_shape[1],
kernel_size = layer[["kernel_size"]],
stride = layer[["stride"]],
padding = layer[["padding"]],
dilation = layer[["dilation"]])
kernel_size = x[["kernel_size"]],
stride = x[["stride"]],
padding = x[["padding"]],
dilation = x[["dilation"]])

kernel_size <- paste(layer[["kernel_size"]], collapse = "x")
stride <- paste(layer[["stride"]], collapse = "x")
padding <- paste(layer[["padding"]], collapse = "x")
dilation <- paste(layer[["dilation"]], collapse = "x")
kernel_size <- paste(x[["kernel_size"]], collapse = "x")
stride <- paste(x[["stride"]], collapse = "x")
padding <- paste(x[["padding"]], collapse = "x")
dilation <- paste(x[["dilation"]], collapse = "x")

cat("-------------------------------------------------------------------------------\n")
cat(paste0("MaxPool |Input: ", paste(input_shape, collapse = "x"), "\n"))
@@ -806,18 +827,25 @@ print.maxPool <- function(layer, input_shape, ...) {
return(invisible(output_shape))
}

print.transfer <- function(layer, input_shape, output_shape, ...) {

if(layer$replace_classifier) {
output_shape <- get_transfer_output_shape(layer$name)
#' Print transfer model
#'
#' @param x an object of class transfer
#' @param input_shape input shape
#' @param output_shape output shape
#' @param ... further arguments, not supported yet
print.transfer <- function(x, input_shape, output_shape, ...) {

if(x$replace_classifier) {
output_shape <- get_transfer_output_shape(x$name)
}

cat("-------------------------------------------------------------------------------\n")
cat(paste0("Transfer |Input: ", paste(input_shape, collapse = "x"), "\n"))
cat(paste0(" |Output: ", paste(output_shape, collapse = "x"), "\n"))
cat(paste0(" |Network: ", layer[["name"]] , "\n"))
cat(paste0(" |Pretrained: ", layer[["pretrained"]] , "\n"))
if(layer[["pretrained"]]) cat(paste0(" |Weights frozen: ", layer[["freeze"]] , "\n"))
cat(paste0(" |Network: ", x[["name"]] , "\n"))
cat(paste0(" |Pretrained: ", x[["pretrained"]] , "\n"))
if(x[["pretrained"]]) cat(paste0(" |Weights frozen: ", x[["freeze"]] , "\n"))

return(invisible(output_shape))
}
@@ -830,7 +858,6 @@ print.transfer <- function(layer, input_shape, output_shape, ...) {
#' @param output_shape the number of nodes in the output layer
#' @param ... additional arguments
#' @return nothing
#'
plot.citoarchitecture <- function(x, input_shape, output_shape, ...) {
x <- adjust_architecture(x, length(input_shape)-1)

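The hunks above rename the first argument of every `print` method from `layer` to `x`. A small, self-contained sketch (with a hypothetical `mylayer` class, not part of cito) of why S3 print methods are written this way: the method's signature must be consistent with the generic `print(x, ...)`, and R CMD check flags mismatched argument names.

```r
# Hypothetical class illustrating S3 dispatch for print(), mirroring the
# renamed methods above: the first argument must be named `x` to match
# the print(x, ...) generic (R CMD check reports inconsistent signatures).
print.mylayer <- function(x, ...) {
  cat("Linear |Output: ", x[["n_neurons"]], "\n", sep = "")
  invisible(x[["n_neurons"]])  # return shape info invisibly, as cito does
}

layer <- structure(list(n_neurons = 32L), class = "mylayer")
print(layer)  # dispatches to print.mylayer
```

Returning the output shape invisibly lets the next layer's print method consume it while keeping the console output clean.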
3 changes: 3 additions & 0 deletions cran-comments.md
@@ -1,4 +1,5 @@
# Version 1.1
## cito submission 1 2024/03/17
This is a major update. The update includes new loss/likelihood functions,
hyperparameter tuning, bug fixes, and improved documentation.

@@ -19,6 +20,8 @@ Github actions:
Win-builder
* R-release, R-development, and R-oldrelease

Spelling has been checked (words still flagged by the spell checker are intentional).



# Version 1.0.2
2 changes: 1 addition & 1 deletion man/cito.Rd


18 changes: 18 additions & 0 deletions man/print.avgPool.Rd


18 changes: 18 additions & 0 deletions man/print.conv.Rd


18 changes: 18 additions & 0 deletions man/print.linear.Rd


18 changes: 18 additions & 0 deletions man/print.maxPool.Rd


20 changes: 20 additions & 0 deletions man/print.transfer.Rd


