Benchmark Functions #1010
Comments
Here is something in userland that may provide some ideas:
This outputs:
Going from here, the following may be useful:
I would think that if you wanted to ensure you didn't regress on a benchmark, you would write this as a test and call some timing function. Also, right now you can write this in a test and get something that kind of works okay (bar formatting and probably some other things).
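As a rough illustration of what "write this as a test" could look like today, here is a minimal sketch using `std.time.Timer`; the `fib` workload and the iteration count are placeholders invented for the example, not anything from this thread:

```zig
const std = @import("std");

// Hypothetical workload to time.
fn fib(n: u64) u64 {
    return if (n < 2) n else fib(n - 1) + fib(n - 2);
}

test "fib timing (rough benchmark as a test)" {
    var timer = try std.time.Timer.start();
    var sum: u64 = 0;
    var i: usize = 0;
    while (i < 100) : (i += 1) {
        sum +%= fib(20);
    }
    const elapsed_ns = timer.read();
    std.debug.print("fib(20) x 100: {d} ns total, ~{d} ns/call\n", .{ elapsed_ns, elapsed_ns / 100 });
    // Using the result keeps the optimizer from deleting the loop.
    try std.testing.expect(sum == 100 * 6765);
}
```

As the comment says, this "kind of works": the output formatting is ad hoc, and there is no repetition or statistics, but `zig test` will run it.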
For this purpose, it may be more useful to extend the tests:
and a smart test runner, which would repeat slower-than-expected tests a couple of times to make sure the result is not just a fluke. I have implemented such a thing for my test system in C. It also increases the process priority and makes time measurement more precise (using Win32 tricks). To be useful, practical benchmarking support would need to store the data, plus tooling to show trends over time. Also, any form of benchmarking would benefit from the ability to mock non-interesting functions, as proposed e.g. in #500.
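The "repeat slow measurements" idea above could be sketched roughly as follows; the function names, retry policy, and threshold are all invented for illustration:

```zig
const std = @import("std");

fn measureNs(comptime f: fn () void) !u64 {
    var timer = try std.time.Timer.start();
    f();
    return timer.read();
}

// If a run exceeds the expected time, re-run it a few times and keep the
// minimum, so a one-off scheduler hiccup is not reported as a regression.
fn robustMeasureNs(comptime f: fn () void, expected_ns: u64, retries: usize) !u64 {
    var best = try measureNs(f);
    var i: usize = 0;
    while (best > expected_ns and i < retries) : (i += 1) {
        best = @min(best, try measureNs(f));
    }
    return best;
}
```

Taking the minimum over retries is one common policy; a real runner might instead keep the median or flag high variance.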
Btw, tests could be extended to ensure multiple dynamic properties of code:
In a limited way these things could be checked statically, but dynamic checking covers more situations. Mocking could be used to intercept and verify I/O and TCP/IP calls.
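One such dynamic property, "no memory leaks", can already be checked today: `std.testing.allocator` fails any test that leaks. A minimal sketch (counting allocations or intercepting I/O would need a custom wrapper allocator, which is not shown here):

```zig
const std = @import("std");

test "no leaks (dynamic property check)" {
    const a = std.testing.allocator;
    const buf = try a.alloc(u8, 128);
    // Forgetting this free would make the test fail with a leak report.
    defer a.free(buf);
    @memset(buf, 0);
    try std.testing.expect(buf.len == 128);
}
```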
There is one reason to potentially have benchmarks in the language rather than in userland, and that is Profile Guided Optimization. The idea is that you could create benchmarks that are automatically run in release modes and used for PGO. You would not want regular tests to be used for PGO, because regular tests exercise edge cases by design, which directly conflicts with optimizing for the common case.
@andrewrk This can be implemented in the same way:
A customizable test runner was once proposed in #567. Edit: potential expansion of the scope of the tests (perhaps Edit2: Benchmarks intended for automatic checking of performance regressions can also be implemented this way. A specialized test runner would recognize the tests to be checked, record their times somewhere, and then warn if a regression happens.
Some thoughts:
Some projects for inspiration: C/C++, Haskell, Nim, Rust
More info on this decision:
Is there any decent library for benchmarking that would be the way to go? Maybe turn https://github.com/ziglang/gotta-go-fast into a general-purpose solution?
No, you're not hearing that. Please refrain from asking nonsensical rhetorical questions on closed issues. All discussions on the issue tracker must be focused and effortful.
I agree that benchmarking is hard, but testing is hard too. We still have tests in the language because they make it easier to work with the language using its toolchain (zig build, zig run, zig test), and it's nice to have unit tests near the code. It sounds nice to use zig to run benchmarks and to have micro-benchmarks near the code too. My point is: just as in-language testing encourages people to write tests, in-language benchmarking will encourage people to make their software faster.
Similar to how
test "MyTest" { ... }
works, maybe there should be a benchmark equivalent. It would run your code a set number of times, determined by a build option maybe? Or perhaps it would run until it got the standard deviation down to a certain point, similar to how a lot of benchmarkers work. It would then print out the time it took along with a few statistics, and perhaps you could even assert that the time has to be less than a certain value. Something like: We could also maybe support parallelisation through something like:
I'm not 100% sure on the syntax/use, but I think the idea is definitely strong.
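Purely as a speculative sketch of the syntax proposed above (no such `benchmark` block exists in Zig; the option names and assertion form are invented for illustration):

```zig
// Hypothetical syntax only -- not a feature of the language.
benchmark "MyBench" {
    const r = fib(20);
    std.mem.doNotOptimizeAway(r);
}

// Possible assertion/parallelisation options, by analogy with the comment above:
benchmark "MyBench" (.{ .max_mean_ns = 50_000, .parallel = true }) {
    // ...
}
```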
Use case
For easy benchmarking of functions. As simple as that.