
Fix a few typos #142

Merged: 8 commits, Aug 3, 2023
2 changes: 1 addition & 1 deletion docs/src/contributing/new-release.md
@@ -12,4 +12,4 @@ In order to start the release process a person with the associated permissions should

![Release comment](../assets/img/release_comment.png)

- The Julia Registrator bot should automaticallly register a request for the new release. Once all checks have passed on the Julia Registrator's side, the new release will be published and tagged automatically.
+ The Julia Registrator bot should automatically register a request for the new release. Once all checks have passed on the Julia Registrator's side, the new release will be published and tagged automatically.
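For context, the release comment referenced in the screenshot above is, in typical Julia package setups, a plain GitHub comment that triggers the Registrator bot; the exact trigger text may vary per repository configuration:

```text
@JuliaRegistrator register
```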
4 changes: 2 additions & 2 deletions docs/src/manuals/custom-node.md
@@ -287,7 +287,7 @@ result_mybernoulli = inference(
)
```

- We have now completed our experiment and obtained the posterior marginal distribution for p through inference. To evaluate the performance of our inference, we can compare the estimated posterior to the true value. In our experiment, the true value for p is 0.75, and we can see that the estimated posterior has a mean of approximately 0.713, which shows that our custom node was able to succesfully pass messages towards the `π` variable in order to learn the true value of the parameter.
+ We have now completed our experiment and obtained the posterior marginal distribution for p through inference. To evaluate the performance of our inference, we can compare the estimated posterior to the true value. In our experiment, the true value for p is 0.75, and we can see that the estimated posterior has a mean of approximately 0.713, which shows that our custom node was able to successfully pass messages towards the `π` variable in order to learn the true value of the parameter.

```@example create-node
using Plots
@@ -328,4 +328,4 @@ end
nothing # hide
```

- Congratulations! You have succesfully implemented your own custom node in `RxInfer`. We went through the definition of a node to the implementation of the update rules and marginal posterior calculations. Finally we tested our custom node in a model and checked if we implemented everything correctly.
+ Congratulations! You have successfully implemented your own custom node in `RxInfer`. We went through the definition of a node to the implementation of the update rules and marginal posterior calculations. Finally we tested our custom node in a model and checked if we implemented everything correctly.
2 changes: 1 addition & 1 deletion docs/src/manuals/model-specification.md
@@ -190,7 +190,7 @@ For some factor nodes we rely on the syntax from `Distributions.jl` to make it e
To quickly check the list of all available factor nodes that can be used in the model specification language call `?make_node` or `Base.doc(make_node)`.


- Specifically for the Gaussian/Normal case we have custom implementations that yield a higher computational efficiency and improved stability in comparison to `Distributions.jl` as these are optimized for sampling operations. Our aliases for these distributions therefore do not correspond to the implementations from `Distributions.jl`. However, our model specifciation language is compatible with syntax from `Distributions.jl` for normal distributions, which will be automatically converted. `RxInfer` has its own implementation because of the following 3 reasons:
+ Specifically for the Gaussian/Normal case we have custom implementations that yield a higher computational efficiency and improved stability in comparison to `Distributions.jl` as these are optimized for sampling operations. Our aliases for these distributions therefore do not correspond to the implementations from `Distributions.jl`. However, our model specification language is compatible with syntax from `Distributions.jl` for normal distributions, which will be automatically converted. `RxInfer` has its own implementation because of the following 3 reasons:
1. `Distributions.jl` constructs normal distributions by saving the corresponding covariance matrices in a `PDMat` object from `PDMats.jl`. This construction always computes the Cholesky decompositions of the covariance matrices, which is very convenient for sampling-based procedures. However, in `RxInfer.jl` we mostly base our computations on analytical expressions which do not always need to compute the Cholesky decomposition. In order to reduce the overhead that `Distributions.jl` introduces, we therefore have custom implementations.
2. Depending on the update rules, we might favor different parameterizations of the normal distributions. `ReactiveMP.jl` has quite a variety in parameterizations that allow us to efficient computations where we convert between parameterizations as little as possible.
3. In certain situations we value stability a lot, especially when inverting matrices. `PDMats.jl`, and hence `Distributions.jl`, is not capable to fulfill all needs that we have here. Therefore we use `PositiveFactorizations.jl` to cope with the corner-cases.
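As a rough illustration of point 2 above, the sketch below shows the idea of switching between two Gaussian parameterizations using only base Julia linear algebra; the variable names are purely illustrative and do not correspond to actual `ReactiveMP.jl` types:

```julia
using LinearAlgebra

# Moment parameterization: mean μ and covariance Σ.
μ = [1.0, 2.0]
Σ = [2.0 0.5; 0.5 1.0]

# Natural (information) parameterization: precision Λ = Σ⁻¹ and
# weighted mean ξ = Λμ. Some analytical update rules are cheaper in
# this form, e.g. multiplying Gaussian densities just adds ξ and Λ,
# which is why avoiding unnecessary conversions pays off.
Λ = inv(Σ)
ξ = Λ * μ

# Converting back recovers the moment form up to floating-point error.
Σ_back = inv(Λ)
μ_back = Σ_back * ξ

@assert isapprox(Σ_back, Σ; atol = 1e-10)
@assert isapprox(μ_back, μ; atol = 1e-10)
```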
4 changes: 2 additions & 2 deletions scripts/examples.jl
@@ -121,7 +121,7 @@ function Base.run(examplesrunner::ExamplesRunner)
push!(examplesrunner.runner_tasks, task)
end

- # For each remotelly called task we `fetch` its result or save an exception
+ # For each remotely called task we `fetch` its result or save an exception
foreach(fetch, examplesrunner.runner_tasks)

# If exception are not empty we notify the user and force-fail
@@ -182,7 +182,7 @@ function Base.run(examplesrunner::ExamplesRunner)
mdtext = read(mdpath, String)

# Check if example failed with an error
- # TODO: we might have better heurstic here? But I couldn't find a way to tell `Weave.jl` if an error has occured
+ # TODO: we might have better heuristic here? But I couldn't find a way to tell `Weave.jl` if an error has occurred
# TODO: try to improve this later
erroridx = findnext("```\nError:", mdtext, 1)
if !isnothing(erroridx)
2 changes: 1 addition & 1 deletion src/graphppl.jl
@@ -170,7 +170,7 @@ function write_pipeline_stage(fform, stage)
elseif @capture(stage, RequireMarginal(args__))
return :(ReactiveMP.RequireMarginalFunctionalDependencies($indices, $initials))
else
- error("Unreacheable reached in `write_pipeline_stage`.")
+ error("Unreachable reached in `write_pipeline_stage`.")
end
else
return stage
10 changes: 5 additions & 5 deletions src/inference.jl
@@ -101,7 +101,7 @@ function __inference_process_error(err::StackOverflowError, rethrow)
Stack overflow error occurred during the inference procedure.
The inference engine may execute message update rules recursively, hence, the model graph size might be causing this error.
To resolve this issue, try using `limit_stack_depth` inference option for model creation. See `?inference` documentation for more details.
- The `limit_stack_depth` option does not help against over stack overflow errors that might hapenning outside of the model creation or message update rules execution.
+ The `limit_stack_depth` option does not help against over stack overflow errors that might happening outside of the model creation or message update rules execution.
"""
if rethrow
Base.rethrow(err) # Shows the original stack trace
@@ -181,7 +181,7 @@ This structure is used as a return value from the [`inference`](@ref) function.
- `free_energy`: (optional) An array of Bethe Free Energy values per VMP iteration. See the `free_energy` argument for [`inference`](@ref).
- `model`: `FactorGraphModel` object reference.
- `returnval`: Return value from executed `@model`.
- - `error`: (optional) A reference to an exception, that might have occured during the inference. See the `catch_exception` argument for [`inference`](@ref).
+ - `error`: (optional) A reference to an exception, that might have occurred during the inference. See the `catch_exception` argument for [`inference`](@ref).

See also: [`inference`](@ref)
"""
@@ -218,7 +218,7 @@ function Base.show(io::IO, result::InferenceResult)
if iserror(result)
print(
io,
- "[ WARN ] An error has occured during the inference procedure. The result might not be complete. You can use the `.error` field to access the error and its backtrace. Use `Base.showerror` function to display the error."
+ "[ WARN ] An error has occurred during the inference procedure. The result might not be complete. You can use the `.error` field to access the error and its backtrace. Use `Base.showerror` function to display the error."
)
end
end
@@ -466,10 +466,10 @@ If the `addons` argument has been used, automatically changes the default strate

- ### `catch_exception`

- The `catch_exception` keyword argument specifies whether exceptions during the inference procedure should be catched in the `error` field of the
+ The `catch_exception` keyword argument specifies whether exceptions during the inference procedure should be caught in the `error` field of the
result. By default, if exception occurs during the inference procedure the result will be lost. Set `catch_exception = true` to obtain partial result
for the inference in case if an exception occurs. Use `RxInfer.issuccess` and `RxInfer.iserror` function to check if the inference completed successfully or failed.
- If an error occurs, the `error` field will store a tuple, where first element is the exception itself and the second element is the catched `backtrace`. Use the `stacktrace` function
+ If an error occurs, the `error` field will store a tuple, where first element is the exception itself and the second element is the caught `backtrace`. Use the `stacktrace` function
with the `backtrace` as an argument to recover the stacktrace of the error. Use `Base.showerror` function to display
the error.
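The docstring passage above can be sketched as a usage pattern; this is a hedged illustration only, where `my_model` and `data` are hypothetical placeholders and `RxInfer` is assumed to be installed:

```julia
using RxInfer

result = inference(
    model = my_model(),      # hypothetical model defined with `@model`
    data  = (y = data,),     # hypothetical observed data
    catch_exception = true   # keep a partial result if inference throws
)

if RxInfer.iserror(result)
    # `result.error` is a tuple of (exception, backtrace)
    err, bt = result.error
    Base.showerror(stderr, err)  # display the error itself
    stacktrace(bt)               # recover the stacktrace of the error
end
```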

4 changes: 2 additions & 2 deletions test/runtests.jl
@@ -119,11 +119,11 @@ function Base.run(testrunner::TestRunner)
end
return nothing
end
- # We save the created task for later syncronization
+ # We save the created task for later synchronization
push!(testrunner.test_tasks, task)
end

- # For each remotelly called task we `fetch` its result or save an exception
+ # For each remotely called task we `fetch` its result or save an exception
foreach(fetch, testrunner.test_tasks)

# If exception are not empty we notify the user and force-fail