Commit

docs: update broken links and fail on linkcheck
avik-pal committed Jul 28, 2024
1 parent 8c630e4 commit 99db0f4
Showing 9 changed files with 17 additions and 16 deletions.
1 change: 0 additions & 1 deletion docs/make.jl
@@ -90,7 +90,6 @@ makedocs(; sitename="Lux.jl Docs",
repo="github.com/LuxDL/Lux.jl", devbranch="main", devurl="dev",
deploy_url="https://lux.csail.mit.edu", deploy_decision),
draft=false,
-warnonly=:linkcheck, # Lately it has been failing quite a lot but those links are actually fine
pages)

deploydocs(; repo="github.com/LuxDL/Lux.jl.git",
2 changes: 1 addition & 1 deletion docs/src/manual/autodiff.md
@@ -14,7 +14,7 @@ Lux. Additionally, we provide some convenience functions for working with AD.
| [`ForwardDiff.jl`](https://github.com/JuliaDiff/ForwardDiff.jl) | Forward | ✔️ | ✔️ | ✔️ | Tier I |
| [`ReverseDiff.jl`](https://github.com/JuliaDiff/ReverseDiff.jl) | Reverse | ✔️ ||| Tier II |
| [`Tracker.jl`](https://github.com/FluxML/Tracker.jl) | Reverse | ✔️ | ✔️ || Tier II |
-| [`Tapir.jl`](https://github.com/withbayes/Tapir.jl) | Reverse |[^q] ||| Tier III |
+| [`Tapir.jl`](https://github.com/compintell/Tapir.jl) | Reverse |[^q] ||| Tier III |
| [`Diffractor.jl`](https://github.com/JuliaDiff/Diffractor.jl) | Forward |[^q] |[^q] |[^q] | Tier III |

[^e]: Currently Enzyme outperforms other AD packages in terms of CPU performance. However,
6 changes: 3 additions & 3 deletions docs/src/manual/nested_autodiff.md
@@ -192,9 +192,9 @@ nothing; # hide

Hutchinson Trace Estimation often shows up in machine learning literature to provide a fast
estimate of the trace of a Jacobian Matrix. This is based off of
-[Hutchinson 1990](https://www.researchgate.net/publication/243668757_A_Stochastic_Estimator_of_the_Trace_of_the_Influence_Matrix_for_Laplacian_Smoothing_Splines) which
-computes the estimated trace of a matrix ``A \in \mathbb{R}^{D \times D}`` using random
-vectors ``v \in \mathbb{R}^{D}`` s.t. ``\mathbb{E}\left[v v^T\right] = I``.
+[Hutchinson 1990](https://www.nowozin.net/sebastian/blog/thoughts-on-trace-estimation-in-deep-learning.html)
+which computes the estimated trace of a matrix ``A \in \mathbb{R}^{D \times D}`` using
+random vectors ``v \in \mathbb{R}^{D}`` s.t. ``\mathbb{E}\left[v v^T\right] = I``.

```math
\text{Tr}(A) = \mathbb{E}\left[v^T A v\right] = \frac{1}{V} \sum_{i = 1}^V v_i^T A v_i
```
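For illustration, here is a minimal standalone Julia sketch of the estimator above, assuming Rademacher probe vectors (entries ±1) and a plain dense matrix; the `hutchinson_trace` name and the probe count are hypothetical.

```julia
# A minimal sketch of Hutchinson trace estimation, assuming Rademacher probes
# (entries ±1), which satisfy E[v vᵀ] = I as required above.
using LinearAlgebra, Random

function hutchinson_trace(A::AbstractMatrix, V::Int; rng=Random.default_rng())
    D = size(A, 1)
    est = 0.0
    for _ in 1:V
        v = 2.0 .* rand(rng, Bool, D) .- 1.0  # Rademacher probe vector of length D
        est += dot(v, A, v)                   # vᵀ A v
    end
    return est / V                            # (1/V) Σᵢ vᵢᵀ A vᵢ
end

A = randn(8, 8)
(hutchinson_trace(A, 100_000), tr(A))  # the two values should be close
```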
2 changes: 1 addition & 1 deletion docs/src/manual/performance_pitfalls.md
@@ -67,4 +67,4 @@ GPUArraysCore.allowscalar(false)
`Lux.jl` is integrated with `DispatchDoctor.jl` to catch type instabilities. You can easily
enable it by setting the `instability_check` preference. This will help you catch type
instabilities in your code. For more information on how to set preferences, check out
-[`set_dispatch_doctor_preferences`](@ref).
+[`Lux.set_dispatch_doctor_preferences!`](@ref).
4 changes: 2 additions & 2 deletions docs/src/manual/preferences.md
@@ -50,8 +50,8 @@ By default, both of these preferences are set to `false`.
## [Dispatch Doctor](@id dispatch-doctor-preference)

1. `instability_check` - Preference controlling the dispatch doctor. See the documentation
-on [`set_dispatch_doctor_preferences!`](@ref) for more details. The preferences need to
-be set for `LuxCore` and `LuxLib` packages. Both of them default to `disable`.
+on [`Lux.set_dispatch_doctor_preferences!`](@ref) for more details. The preferences need
+to be set for `LuxCore` and `LuxLib` packages. Both of them default to `disable`.
- Setting the `LuxCore` preference sets the check at the level of `LuxCore.apply`. This
essentially activates the dispatch doctor for all Lux layers.
- Setting the `LuxLib` preference sets the check at the level of functional layer of
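As a rough illustration of setting these preferences, here is a sketch using Preferences.jl directly; the mode value `"error"` is an assumption (the documented helper is `Lux.set_dispatch_doctor_preferences!`).

```julia
# A hedged sketch of setting the `instability_check` preference for both packages.
# The value "error" is assumed; consult the Lux/DispatchDoctor docs for accepted modes.
using Preferences, LuxCore, LuxLib

set_preferences!(LuxCore, "instability_check" => "error")  # check at the `LuxCore.apply` level
set_preferences!(LuxLib, "instability_check" => "error")   # check at the functional-layer level
# Restart the Julia session for the new preferences to take effect.
```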
2 changes: 1 addition & 1 deletion examples/Basics/main.jl
@@ -3,7 +3,7 @@
# This is a quick intro to [Lux](https://github.com/LuxDL/Lux.jl) loosely based on:
#
# 1. [PyTorch's tutorial](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html).
-# 2. [Flux's tutorial](https://fluxml.ai/Flux.jl/stable/tutorials/2020-09-15-deep-learning-flux/).
+# 2. Flux's tutorial (the link for which has now been lost to abyss).
# 3. [Jax's tutorial](https://jax.readthedocs.io/en/latest/jax-101/index.html).
#
# It introduces basic Julia programming, as well `Zygote`, a source-to-source automatic
10 changes: 6 additions & 4 deletions examples/BayesianNN/main.jl
@@ -1,11 +1,13 @@
# # Bayesian Neural Network

# We borrow this tutorial from the
-# [official Turing Docs](https://turinglang.org/stable/tutorials/03-bayesian-neural-network/). We
-# will show how the explicit parameterization of Lux enables first-class composability with
-# packages which expect flattened out parameter vectors.
+# [official Turing Docs](https://turinglang.org/docs/tutorials/03-bayesian-neural-network/index.html).
+# We will show how the explicit parameterization of Lux enables first-class composability
+# with packages which expect flattened out parameter vectors.

-# We will use [Turing.jl](https://turinglang.org/stable/) with [Lux.jl](https://lux.csail.mit.edu/)
+# Note: The tutorial in the official Turing docs is now using Lux instead of Flux.
+
+# We will use [Turing.jl](https://turinglang.org/) with [Lux.jl](https://lux.csail.mit.edu/)
# to implement implementing a classification algorithm. Lets start by importing the relevant
# libraries.

4 changes: 2 additions & 2 deletions examples/SymbolicOptimalControl/main.jl
@@ -2,8 +2,8 @@

# This tutorial is based on [SciMLSensitivity.jl tutorial](https://docs.sciml.ai/SciMLSensitivity/stable/examples/optimal_control/optimal_control/).
# Instead of using a classical NN architecture, here we will combine the NN with a symbolic
-# expression from [DynamicExpressions.jl](https://symbolicml.org/DynamicExpressions.jl) (the
-# symbolic engine behind [SymbolicRegression.jl](https://astroautomata.com/SymbolicRegression.jl)
+# expression from [DynamicExpressions.jl](https://symbolicml.org/DynamicExpressions.jl/) (the
+# symbolic engine behind [SymbolicRegression.jl](https://astroautomata.com/SymbolicRegression.jl/)
# and [PySR](https://github.com/MilesCranmer/PySR/)).

# Here we will solve a classic optimal control problem with a universal differential
2 changes: 1 addition & 1 deletion src/helpers/losses.jl
@@ -595,7 +595,7 @@ true
## Special Note
This function takes any of the
-[`LossFunctions.jl`](https://juliaml.github.io/LossFunctions.jl/stable) public functions
+[`LossFunctions.jl`](https://juliaml.github.io/LossFunctions.jl/stable/) public functions
into the Lux Losses API with efficient aggregation.
"""
@concrete struct GenericLossFunction <: AbstractLossFunction
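As a quick, hedged illustration of the API described in this docstring, here is a sketch that wraps a LossFunctions.jl public loss; it assumes the default aggregation and the two-argument `(ŷ, y)` call form.

```julia
# A sketch of wrapping a LossFunctions.jl public loss in GenericLossFunction.
using Lux, LossFunctions

loss = GenericLossFunction(L2DistLoss())      # assumes the default (mean) aggregation
ŷ, y = randn(Float32, 10), randn(Float32, 10)
loss(ŷ, y)                                    # ≈ mean(abs2, ŷ .- y)
```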
