Improvements to DynamicPPLBenchmarks #346

Draft: wants to merge 27 commits into master from tor/benchmark-update

Commits (27):
57b5d47  bigboy update to benchmarks (torfjelde, Aug 2, 2021)
e7c0a76  Merge branch 'master' into tor/benchmark-update (torfjelde, Aug 19, 2021)
60ec2c8  Merge branch 'master' into tor/benchmark-update (torfjelde, Sep 8, 2021)
eb1b83c  Merge branch 'master' into tor/benchmark-update (torfjelde, Nov 6, 2021)
d8afa71  Merge branch 'master' into tor/benchmark-update (torfjelde, Nov 6, 2021)
5bb48d2  make models return random variables as NamedTuple as it can be useful… (torfjelde, Dec 2, 2021)
02484cf  add benchmarking of evaluation with SimpleVarInfo with NamedTuple (torfjelde, Dec 2, 2021)
5c59769  added some information about the execution environment (torfjelde, Dec 3, 2021)
f1f1381  added judgementtable_single (torfjelde, Dec 3, 2021)
a48553a  added benchmarking of SimpleVarInfo, if present (torfjelde, Dec 3, 2021)
f2dc062  Merge branch 'master' into tor/benchmark-update (torfjelde, Dec 3, 2021)
fa675de  added ComponentArrays benchmarking for SimpleVarInfo (torfjelde, Dec 5, 2021)
3962da2  Merge branch 'master' into tor/benchmark-update (yebai, Aug 29, 2022)
53dc571  Merge branch 'master' into tor/benchmark-update (yebai, Nov 2, 2022)
f5705d5  Merge branch 'master' into tor/benchmark-update (torfjelde, Nov 7, 2022)
7f569f7  formatting (torfjelde, Nov 7, 2022)
4a06150  Merge branch 'master' into tor/benchmark-update (yebai, Feb 2, 2023)
a1cc6bf  Apply suggestions from code review (yebai, Feb 2, 2023)
3e7e200  Update benchmarks/benchmarks.jmd (yebai, Feb 2, 2023)
c867ae8  Merge branch 'master' into tor/benchmark-update (yebai, Jul 4, 2023)
96f120b  merged main into this one (shravanngoswamii, Dec 19, 2024)
0460b64  Benchmarking CI (shravanngoswamii, Dec 19, 2024)
a8541b5  Julia script for benchmarking on top of current setup (shravanngoswamii, Feb 1, 2025)
0291c2f  keep old results for reference (shravanngoswamii, Feb 1, 2025)
6f255d1  Merge branch 'master' of https://github.com/TuringLang/DynamicPPL.jl … (shravanngoswamii, Feb 1, 2025)
3b5e448  updated benchmarking setup (shravanngoswamii, Feb 20, 2025)
1e61025  Merge branch 'master' of https://github.com/TuringLang/DynamicPPL.jl … (shravanngoswamii, Feb 20, 2025)
32 changes: 32 additions & 0 deletions .github/workflows/Benchmarking.yml
@@ -0,0 +1,32 @@
name: Benchmarking

on:
  push:
    branches:
      - master

jobs:
  benchmark:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4

      - name: Set up Julia
        uses: julia-actions/setup-julia@v2
        with:
          version: '1'

      - name: Install Dependencies
        run: julia --project=benchmarks/ -e 'using Pkg; Pkg.instantiate()'

      - name: Run Benchmarks and Generate Reports
        run: julia --project=benchmarks/ -e 'using DynamicPPLBenchmarks; weave_benchmarks()'

      - name: Deploy to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./benchmarks/results
          publish_branch: gh-pages
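The two benchmark steps can also be reproduced locally from the repository root; a minimal sketch, assuming the benchmarks/ project exports weave_benchmarks as the workflow implies:

using Pkg
Pkg.activate("benchmarks")   # equivalent to --project=benchmarks/
Pkg.instantiate()            # the "Install Dependencies" step
using DynamicPPLBenchmarks
weave_benchmarks()           # the "Run Benchmarks and Generate Reports" step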
7 changes: 2 additions & 5 deletions benchmarks/Project.toml
@@ -4,10 +4,7 @@ version = "0.1.0"

 [deps]
 BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
-DiffUtils = "8294860b-85a6-42f8-8c35-d911f667b5f6"
 Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
 DynamicPPL = "366bfd00-2699-11ea-058f-f148b4cae6d8"
-LibGit2 = "76f85450-5226-5b5a-8eaa-529ad045b433"
-Markdown = "d6f4376e-aef5-505a-96c1-9c027394607a"
-Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
-Weave = "44d3d7a6-8a23-5bf8-98c5-b353f8df5ec9"
+PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
+TuringBenchmarking = "0db1332d-5c25-4deb-809f-459bc696f94f"
49 changes: 0 additions & 49 deletions benchmarks/benchmark_body.jmd

This file was deleted.

87 changes: 87 additions & 0 deletions benchmarks/benchmarks.jl
@@ -0,0 +1,87 @@
using DynamicPPL
using DynamicPPLBenchmarks
using BenchmarkTools
using TuringBenchmarking
using Distributions
using PrettyTables
Comment on lines +1 to +6 (Member):

We are trying to move away from unqualified using X statements in TuringLang (see TuringLang/Turing.jl#2288). Could these be replaced with either using X: X, which then forces you to qualify later uses of the module as X.foo, or with using X: foo if only one or two names need to be imported from X?
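For illustration, a minimal sketch of the qualified-import style the reviewer is asking for, applied to this script; which module exports each name (in particular make_suite and median) is an assumption based on how the script uses them:

using BenchmarkTools: BenchmarkTools
using Distributions: Normal, Beta, Bernoulli, median  # median is the Statistics function, re-exported
using DynamicPPL: @model, VarInfo, SimpleVarInfo, VarName
using DynamicPPLBenchmarks: make_suite
using PrettyTables: PrettyTables, pretty_table
using TuringBenchmarking: TuringBenchmarking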


# Define models
@model function demo1(x)
    m ~ Normal()
    x ~ Normal(m, 1)
    return (m = m, x = x)
end

@model function demo2(y)
    p ~ Beta(1, 1)
    N = length(y)
    for n in 1:N
        y[n] ~ Bernoulli(p)
    end
    return (; p)
end

demo1_data = randn()
demo2_data = rand(Bool, 10)

# Create model instances with the data
demo1_instance = demo1(demo1_data)
demo2_instance = demo2(demo2_data)
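Calling one of these instances runs the model and returns whatever its return statement produces (here a NamedTuple), with unobserved variables freshly sampled and observed ones fixed to the data; the :simple_dict builder below relies on this. A hypothetical call, for orientation:

retvals = demo1_instance()   # e.g. (m = 0.37, x = demo1_data); m is freshly sampled
keys(retvals)                # (:m, :x)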

# Define available AD backends
available_ad_backends = Dict(
    :forwarddiff => :forwarddiff,
    :reversediff => :reversediff,
    :zygote => :zygote,
)
Comment (Member):

This seems unnecessary and unused.


# Define available VarInfo types.
# Each entry is (Name, function to produce the VarInfo)
available_varinfo_types = Dict(
    :untyped => ("UntypedVarInfo", VarInfo),
    :typed => ("TypedVarInfo", m -> VarInfo(m)),
    :simple_namedtuple => ("SimpleVarInfo (NamedTuple)", m -> SimpleVarInfo{Float64}(m())),
    :simple_dict => (
        "SimpleVarInfo (Dict)",
        m -> begin
            retvals = m()
            varnames = map(keys(retvals)) do k
                VarName{k}()
            end
            SimpleVarInfo{Float64}(Dict(zip(varnames, values(retvals))))
        end,
    ),
)
Comment on lines +38 to +51 (Member):

This seems unnecessary and unused.
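If this table were wired into the loop below, the intended lookup would presumably be along these lines (hypothetical usage, not part of the PR):

name, build = available_varinfo_types[:typed]
vi = build(demo1_instance)   # a typed VarInfo for demo1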


# Specify the combinations to test:
# (Model Name, model instance, VarInfo choice, AD backend)
chosen_combinations = [
    ("Demo1", demo1_instance, :typed, :forwarddiff),
    ("Demo1", demo1_instance, :simple_namedtuple, :zygote),
    ("Demo2", demo2_instance, :untyped, :reversediff),
    ("Demo2", demo2_instance, :simple_dict, :forwarddiff),
]

# Store results as tuples: (Model, AD Backend, VarInfo Type, Eval Time, AD Eval Time)
results_table = Tuple{String, String, String, Float64, Float64}[]

for (model_name, model, varinfo_choice, adbackend) in chosen_combinations
    suite = make_suite(model, varinfo_choice, adbackend)
    results = run(suite)
    eval_time = median(results["evaluation"]).time
    ad_eval_time = median(results["AD_Benchmarking"]["evaluation"]["standard"]).time
Comment on lines +68 to +69 (Member):

results["AD_Benchmarking"]["evaluation"]["standard"] actually measures just the plain model evaluation without the gradient, so it is very similar to the results["evaluation"] one. I think you want results["AD_Benchmarking"]["gradient"]["standard"] for the AD time and results["AD_Benchmarking"]["evaluation"]["standard"] for the evaluation time. See also a comment I left in DynamicPPLBenchmarks.jl that relates to this.

    push!(results_table, (model_name, string(adbackend), string(varinfo_choice), eval_time, ad_eval_time))
end

# Convert results to a 2D array for PrettyTables
function to_matrix(tuples::Vector{<:NTuple{5,Any}})
    n = length(tuples)
    data = Array{Any}(undef, n, 5)
    for i in 1:n
        for j in 1:5
            data[i, j] = tuples[i][j]
        end
    end
    return data
end

table_matrix = to_matrix(results_table)
Comment on lines +73 to +85 (Member):

I think this could be simplified to

    table_matrix = hcat(Iterators.map(collect, zip(results_table...))...)

You could also skip the Iterators.map(collect, blah) part if, in the earlier loop, you made the elements of results_table vectors rather than tuples, although I appreciate the neatness of having them be tuples. Or you could make results_table an Array{Any,2}(undef, length(chosen_combinations), 5) from the start. There are a few ways to simplify this; I might not have thought of the simplest one, so feel free to pick your favourite.
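As a quick sanity check that the suggested one-liner agrees with to_matrix, on a toy input (illustrative only, not part of the PR):

toy = [
    ("Demo1", "forwarddiff", "typed", 1.0, 2.0),
    ("Demo2", "zygote", "untyped", 3.0, 4.0),
]
@assert to_matrix(toy) == hcat(Iterators.map(collect, zip(toy...))...)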

header = ["Model", "AD Backend", "VarInfo Type", "Evaluation Time (ns)", "AD Eval Time (ns)"]
pretty_table(table_matrix; header=header, tf=PrettyTables.tf_markdown)
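With tf=PrettyTables.tf_markdown, the script finishes by printing a GitHub-flavoured markdown table shaped like this (timing values elided, since they depend on the machine):

| Model | AD Backend  | VarInfo Type      | Evaluation Time (ns) | AD Eval Time (ns) |
|-------|-------------|-------------------|----------------------|-------------------|
| Demo1 | forwarddiff | typed             | ...                  | ...               |
| Demo1 | zygote      | simple_namedtuple | ...                  | ...               |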
130 changes: 0 additions & 130 deletions benchmarks/benchmarks.jmd

This file was deleted.
