
Commit

Updated the docs
prithagupta committed Aug 15, 2024
1 parent 8f248ea commit 4548307
Showing 2 changed files with 8 additions and 8 deletions.
2 changes: 1 addition & 1 deletion autoqild/dataset_readers/synthetic_data_generator.py
@@ -486,7 +486,7 @@ def bayes_predictor_pc_softmax_mi(self):
- \( z_k \) is the logit or raw score for class \( k \).
- \( K \) is the total number of classes.
PC-Softmax Function:
.. math::
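Note: the collapsed hunk above cuts off the body of the `.. math::` directive, so the exact PC-softmax formula documented in `synthetic_data_generator.py` is not visible here. As a hedged illustration only, the sketch below implements one standard formulation of the PC-softmax (each exponentiated logit weighted by its class prior) and the plug-in mutual information estimate built on it; the function names and the formulation itself are assumptions, not taken from the repository.

import numpy as np

def pc_softmax(logits, class_priors):
    # Prior-corrected softmax: weight each exp(z_k) by the class prior p_k and
    # normalise, giving p_k * exp(z_k) / sum_j p_j * exp(z_j).
    # logits       : float array of shape (n_samples, K) with raw scores z_k.
    # class_priors : float array of shape (K,) with marginal class probabilities p_k.
    shifted = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    weighted = class_priors * np.exp(shifted)
    return weighted / weighted.sum(axis=1, keepdims=True)

def pc_softmax_mi(logits, y_true, class_priors):
    # Plug-in MI estimate: average log-ratio of the PC-softmax output for the
    # true class to that class's prior, i.e. mean of log( q(y|x) / p(y) ).
    # y_true : integer array of shape (n_samples,) with class labels in [0, K).
    probs = pc_softmax(logits, class_priors)
    ratios = probs[np.arange(len(y_true)), y_true] / class_priors[y_true]
    return float(np.mean(np.log(ratios)))

With uniform class priors the prior weights cancel, and the estimate reduces to log K plus the mean log-softmax probability of the true class, i.e. an estimate of H(Y) - H(Y|X).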
14 changes: 7 additions & 7 deletions autoqild/mi_estimators/pc_softmax_estimator.py
@@ -42,13 +42,13 @@ class PCSoftmaxMIEstimator(MIEstimatorBase):
Optimizer type to use for training the neural network.
Must be one of:
-   - `RMSprop`: RMSprop optimizer.
-   - `sgd`: Stochastic Gradient Descent optimizer.
-   - `adam`: Adam optimizer.
-   - `AdamW`: AdamW optimizer.
-   - `Adagrad`: Adagrad optimizer.
-   - `Adamax`: Adamax optimizer.
-   - `Adadelta`: Adadelta optimizer.
+   - `RMSprop`: Root Mean Square Propagation, an adaptive learning rate method.
+   - `sgd`: Stochastic Gradient Descent, a simple and widely used optimizer.
+   - `adam`: Adaptive Moment Estimation, combining momentum and RMSprop for better convergence.
+   - `AdamW`: Adam with decoupled weight decay, an improved variant with better regularization.
+   - `Adagrad`: Adaptive Gradient Algorithm, adjusting the learning rate based on feature frequency.
+   - `Adamax`: Variant of Adam based on the infinity norm, more robust with sparse gradients.
+   - `Adadelta`: An extension of Adagrad that seeks to reduce its aggressive learning rate decay.
learning_rate : float, optional, default=0.001
Learning rate for the optimizer.
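Note: the docstring above only enumerates the accepted optimizer strings; presumably the estimator resolves them to the corresponding `torch.optim` classes internally. The sketch below shows one way such a lookup could be written, reusing the string keys from the docstring; `OPTIMIZER_CLASSES` and `make_optimizer` are illustrative names and are not taken from `pc_softmax_estimator.py`.

import torch.optim as optim

# Hypothetical lookup table mirroring the strings listed in the docstring above.
OPTIMIZER_CLASSES = {
    "RMSprop": optim.RMSprop,
    "sgd": optim.SGD,
    "adam": optim.Adam,
    "AdamW": optim.AdamW,
    "Adagrad": optim.Adagrad,
    "Adamax": optim.Adamax,
    "Adadelta": optim.Adadelta,
}

def make_optimizer(optimizer_str, parameters, learning_rate=0.001):
    # Resolve the string key to a torch.optim class and configure it with the
    # given learning rate; unknown keys raise a descriptive error.
    try:
        optimizer_cls = OPTIMIZER_CLASSES[optimizer_str]
    except KeyError:
        raise ValueError(
            f"Unknown optimizer '{optimizer_str}'; expected one of {sorted(OPTIMIZER_CLASSES)}"
        ) from None
    return optimizer_cls(parameters, lr=learning_rate)

For example, make_optimizer("adam", model.parameters()) would return a torch.optim.Adam instance configured with the documented default learning rate of 0.001.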
