Currently Nim relies on very coarse performance benchmarks: tests with a timeout. This doesn't help with tracking over time (i.e. across the git commit history of both compiler and stdlib) whether performance improves or regresses; it only identifies when a test exceeds a very coarse threshold (and in practice, these thresholds simply tend to be bumped when they fail, see tests/vm/tslow_tables.nim). In fact, the performance of tables has regressed recently.
The goal of this RFC is to add a suite of reusable utilities (eg a std/benchmarks library) and individual benchmarks that enable precise tracking (and graphing) of performance over time (i.e. across the git commit history), reporting results as JSON which can then be rendered later as plots (both an overall score and granular scores per benchmark, compilation options, commit hash, and commit time).
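As a starting point for discussion, here is a minimal sketch of what such a std/benchmarks module could expose; all names here (BenchmarkResult, bench, toJson) are illustrative assumptions, not an existing API:

```nim
import std/[monotimes, times, json, stats, strutils]

type
  BenchmarkResult = object
    name: string                 # benchmark identifier, eg "addInt"
    iterations: int              # number of measured runs
    meanNs, stdDevNs: float      # summary statistics in nanoseconds

proc bench(name: string; iterations: int; body: proc ()): BenchmarkResult =
  ## Runs `body` `iterations` times and records per-run wall-clock durations.
  var rs: RunningStat
  for _ in 1 .. iterations:
    let start = getMonoTime()
    body()
    rs.push float((getMonoTime() - start).inNanoseconds)
  BenchmarkResult(name: name, iterations: iterations,
                  meanNs: rs.mean, stdDevNs: rs.standardDeviation)

proc toJson(r: BenchmarkResult): JsonNode =
  ## JSON payload that a plotting tool could consume.
  %*{"name": r.name, "iterations": r.iterations,
     "meanNs": r.meanNs, "stdDevNs": r.stdDevNs}

when isMainModule:
  # micro-benchmark example: building a string of integers
  proc addIntWork() =
    var s = ""
    for i in 0 ..< 1000:
      s.addInt i

  echo bench("addInt", 1000, addIntWork).toJson()
```

Emitting one such JSON object per benchmark (plus commit hash, commit time, and compilation options) is enough for a separate tool to aggregate results across commits.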
desired features
- a set of micro-benchmarks (eg a single function such as addInt) and macro-benchmarks (eg a more complex program) to measure performance
- a single parameter or a few tweakable parameters (eg --define:expectedDuration:numSeconds, --define:expectedMemory:numBytes) that can be used to adjust each benchmark so that it either runs fast with limited RAM (running in CI requires a budget) or takes longer and uses more RAM for better accuracy (eg multiple restarts) when run locally or on a dedicated VM; see the sketch after this list
- a plotting tool that takes the JSON of benchmark results and plots them
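A sketch of how the tweakable parameters mentioned above could be consumed inside a single benchmark via compile-time defines; the define names come from this RFC, everything else (the workload, the per-entry size estimate) is a hypothetical example:

```nim
import std/[tables, monotimes, times]

const
  expectedDuration {.intdefine.} = 1           # seconds the benchmark may take
  expectedMemory {.intdefine.} = 100_000_000   # rough RAM budget in bytes

proc benchTableInsert() =
  ## Scales the workload to the configured time/memory budget.
  let maxEntries = expectedMemory div 64       # crude per-entry size estimate
  let deadline = getMonoTime() + initDuration(seconds = expectedDuration)
  var t = initTable[int, int]()
  var i = 0
  while getMonoTime() < deadline and t.len < maxEntries:
    t[i] = i
    inc i
  echo "inserted ", t.len, " entries"

when isMainModule:
  # CI run (defaults):   nim c -r bench_tables.nim
  # dedicated VM:        nim c -r -d:expectedDuration:30 -d:expectedMemory:4000000000 bench_tables.nim
  benchTableInsert()
```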
example benchmarks
- compilation speed for a set of fixed programs (taking care to use --skipparentcfg --skipusercfg to avoid external factors introducing variability), including large ones like compiler/nim.nim (with or without --forceBuild)
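A sketch of such a compilation-speed macro-benchmark; the helper name and the JSON output shape are illustrative assumptions:

```nim
import std/[osproc, monotimes, times, json, strformat]

proc timeCompile(file: string; extraFlags = ""): float =
  ## Returns wall-clock compile time in seconds, shielding the measurement
  ## from user/parent configs as suggested above.
  let cmd = &"nim c --skipparentcfg --skipusercfg --hints:off {extraFlags} {file}"
  let start = getMonoTime()
  let exitCode = execCmd(cmd)
  doAssert exitCode == 0, "compilation failed: " & cmd
  result = (getMonoTime() - start).inNanoseconds.float / 1e9

when isMainModule:
  echo %*{"benchmark": "compile compiler/nim.nim",
          "seconds": timeCompile("compiler/nim.nim", "--forceBuild")}
```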
links
- Monitoring compilation speed for each commit
- Running on an AWS t2.micro instance (1 vCPU)