Enable Running Zig Benchmarks in zBench Without Caching #26
Comments
So that this doesn't get lost, I'm re-posting my review comment, as I think it is relevant here.
To summarize the GitHub issue discussion: the language maintainers decided against including benchmarks in the Zig testing framework. This may not be a totally final decision (Zig is still not v1.0), but I'd interpret it this way: at the moment, third-party benchmarking frameworks are the way to go. With regard to caching, there are basically two options that seem acceptable to me:
Thank you for summarizing the discussion and the current stance of the Zig language maintainers. Regarding the benchmarking implementation, I've given it some thought and experimentation. While the idea of using the test framework has its appeal, the main-function approach allows us greater control and simplicity in building and running benchmarks. It's more aligned with the conventional execution flow, making it easier to manage and understand, especially for new contributors or when integrating with other tools. I'm open to revisiting the test-based approach if the situation changes. I'm looking forward to your thoughts and any further suggestions you might have on this matter.
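To make the main-function approach concrete, here is a minimal sketch of what a standalone benchmark executable might look like. This is not code from zBench itself; the `fibonacci` function and the iteration count are hypothetical, and zBench's actual entry points may differ:

```zig
const std = @import("std");

// Hypothetical function under measurement.
fn fibonacci(n: u32) u32 {
    if (n < 2) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

pub fn main() !void {
    const iterations: usize = 100;
    var timer = try std.time.Timer.start();
    var i: usize = 0;
    while (i < iterations) : (i += 1) {
        // Prevent the optimizer from eliding the call.
        std.mem.doNotOptimizeAway(fibonacci(20));
    }
    const elapsed_ns = timer.read();
    std.debug.print("avg: {d} ns/op\n", .{elapsed_ns / iterations});
}
```

Because this is an ordinary executable with its own `main`, it is built and run like any other program, with no dependency on the test runner or its caching behavior.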
Description
In the current setup of the zBench project, Zig tests are cached after their first successful execution. This behavior is standard for Zig, where tests are not re-run unless the tested code changes. However, for benchmarking purposes, we need the ability to run tests multiple times without relying on code modifications to trigger re-execution.
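As an illustration (a minimal sketch, not code from zBench itself), a benchmark written inside a `test` block is subject to exactly this caching:

```zig
const std = @import("std");

// Hypothetical benchmark expressed as a test block. After one successful
// run, `zig build test` serves a cached result, so this timing loop will
// not execute again until the source file changes.
test "bench: sum a slice" {
    const data = [_]u64{1} ** 1024;
    var timer = try std.time.Timer.start();
    var sum: u64 = 0;
    for (data) |x| sum += x;
    std.debug.print("took {d} ns (sum = {d})\n", .{ timer.read(), sum });
}
```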
Issue
The caching mechanism in Zig's build system is efficient for many use cases but poses a challenge for continuous benchmarking. We need a method to run benchmarks in the zBench project repeatedly, irrespective of code changes, to obtain consistent and accurate performance measurements.
Current Workarounds
Proposed Solution
A more refined approach is needed to handle benchmarks in Zig, especially for projects like zBench that require repeated test executions for accurate performance assessment. Some potential solutions could include:
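One direction worth sketching: if benchmarks are built as ordinary executables, the build script can mark the run step as having side effects so the build system never serves a cached result. This is a hedged sketch against a recent Zig build API (field names such as `root_source_file` and helpers like `b.path` have changed across Zig versions, and the `src/bench.zig` path is an assumption):

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // Build the benchmark as a regular executable with its own main().
    const bench_exe = b.addExecutable(.{
        .name = "bench",
        .root_source_file = b.path("src/bench.zig"),
        .target = target,
        .optimize = optimize,
    });

    const run_bench = b.addRunArtifact(bench_exe);
    // Mark the run step as having side effects so the build system
    // re-runs it every time instead of reusing a cached result.
    run_bench.has_side_effects = true;

    const bench_step = b.step("bench", "Run benchmarks (never cached)");
    bench_step.dependOn(&run_bench.step);
}
```

With this in place, `zig build bench` would execute the benchmark on every invocation, independent of whether the source has changed.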