-
Notifications
You must be signed in to change notification settings - Fork 3.4k
Implement -ffast-math flag mapping to wasm-opt --fast-math (fixes #21497) #25513
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Implement -ffast-math flag mapping to wasm-opt --fast-math (fixes #21497) #25513
Conversation
Auto-generated documentation update for the new FAST_MATH setting added in the previous commit.
- Fix indentation to use 4 spaces instead of 2 - Add proper docstrings to test methods - Remove trailing whitespace - Ensure consistent code style
Have you confirmed that you actually see a performance with in your program when the |
The 10-30% figure I cited comes from typical fast-math benefits in other compilers for FP-heavy workloads (dot products, transcendental functions, etc.) but the core value of this PR remains: it properly wires up the -ffast-math flag that users expect to work, addressing the specific request in #21497. The performance impact can then be measured empirically rather than assumed. |
Right, but we already support "typical fast-math benefits" I believe, since we already support the What this change does is add the Before land this we would want to show that it did have an actual benefit in real world programs. |
I'll create a benchmark that: |
So it looks like clang's fast-math gave you about 18% speedup and then wasm-opt's Can you confirm using https://github.com/sharkdp/hyperfine which handles doing multiple runs and takes into account warmup? @kripken WDYT? What is |
Binaryen's fast-math is trying to do the same as clang's, so I think it makes sense to connect the two. For example: There is some risk, though, in that these have not been heavily tested, and not fuzzed (they are hard to fuzz). About the benchmark, @devalgupta404 , that still seems like it might be noise. But there is a simple way to check: Please diff the wat text from those wasm files (using Binaryen's wasm-dis, then a normal diff on those). That would show us what exactly Binaryen is doing that LLVM did not. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This file looks like AI slop. Did you use an LLM to generate this code?
https://discourse.llvm.org/t/rfc-llvm-ai-tool-policy-start-small-no-slop/88476 could also be relevant here.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I did use AI assistance for this PR, primarily for testing approach and understanding codebase structure. However core implementation changes were done manually by me based on my understanding of the codebase. Would you prefer I remove the test file and rewrite it?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Just to add to what @kleisauke says, this test has zero value: It prints out promising-looking logging but does no actual testing. This is not something that makes sense to put in a test suite.
@sbc100 I disassembled both WASM binaries into WAT using Binaryen’s wasm-dis (v124) and diffed the text to see exactly what Binaryen changed relative to LLVM. The diff shows instruction level optimizations only in which Binaryen reassociates floating point adds/muls, reduces temporaries (some f64 temps become i32 scratch locals), and regroups repeated math calls to reduce redundancy; there’s also minor loop/counter restructuring. I don’t see any semantic changes, just different but equivalent instruction ordering and local usage. |
@devalgupta404 Please provide that diff. You can use a gist or pastebin if it's too big to fit here. |
Good luck |
@devalgupta404 Thanks, but can you either provide the raw files, or do a diff with context (
From the indentation there it is clear the |
Also, without whitespace, so |
https://gist.github.com/devalgupta404/a9d7d90c4f926e504d078b60e2d717bc @kripken Here's the diff in the exact format you requested (diff -U5): This shows the same optimizations but with the proper unified diff format and 5 lines of context that makes it much easier to read and understand the changes Binaryen applied. |
Hmm, that is still very hard to read. There seem to be extra differences, and also there is a blank line between each line of the diff? Anyhow, doing a test locally, here is the diff I see, which is what I was expecting: https://gist.github.com/kripken/407496f6bf1040618262c96c583d52f6 Those small useful changes are the kind of thing that wasm-opt can do in that mode. |
|
||
|
||
if __name__ == '__main__': | ||
unittest.main() No newline at end of file |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Rather than this type of test, I think we want something in test/test_other.py
. That test can
- Use EMCC_DEBUG to get logging that includes the wasm-opt command, and verify
--fast-math
is in there. See e.g.test_eval_ctors_debug_output
which does that. - Compare the wasm size with and without it, and see an improvement. See e.g.
test_jspi_code_size
which does a size comparison.
'--optimize-stack-ir'] | ||
if settings.FAST_MATH: | ||
opts.append('--fast-math') | ||
return opts |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This is the wrong place for this: it is only sent into the very last binaryen tool invocation, as the comment says. We want to send this to every wasm-opt invocation, perhaps in run_wasm_opt
Implement -ffast-math flag mapping to wasm-opt --fast-math
Description
This PR implements the mapping from the
-ffast-math
compiler flag to thewasm-opt --fast-math
optimization flag, as requested in issue #21497.Changes Made
1. Added FAST_MATH Setting (
src/settings.js
)FAST_MATH
setting in the Tuning section with default value 0[link]
flag as it affects wasm-opt during linking2. Command Line Flag Handling (
tools/cmdline.py
)-ffast-math
flag to setFAST_MATH = 1
-Ofast
optimization level to also enable fast math (since-Ofast
typically includes-ffast-math
semantics)3. wasm-opt Integration (
tools/building.py
)get_last_binaryen_opts()
function to include--fast-math
flag whenFAST_MATH
setting is enabled--fast-math
flag whenFAST_MATH = 0
How It Works
-ffast-math
: Normal behavior, no--fast-math
flag passed to wasm-opt-ffast-math
: SetsFAST_MATH = 1
, causing wasm-opt to receive--fast-math
flag-Ofast
: Automatically enables fast math optimizations (standard behavior)