feat: measure fuel usage and timing stats #465
Conversation
Rude to remove me as an author @Kubuxu 🙃🤣
It wasn't on purpose 🙃 no idea why it picked up steb instead of you as the author.
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

feat: expose machine from executor via deref
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

fvm: more stats
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

feat: track "real" compute gas

feat: time module compile/link/instantiate time

detailed gas and fuel tracing: initial implementation.

Introduces a new Cargo feature `tracing` that enables detailed gas tracing. Traces are accumulated in the GasTracer, owned by the CallManager. Gas traces specify the context, point of tracing, consumption stats, and timing stats.

We currently support these tracing points:
- {Pre,Post}Syscall
- {Pre,Post}Extern
- {Enter,Exit}Call
- Start (of the call stack)
- Finish (of the call stack)

Traces are currently serialized to JSON and printed directly on stdout. There's a bit of duplication in the syscall binding macros that I'm not happy about, and perhaps we could make the tracepoints in the CallManager a bit DRYer.

fix build.

Trace to file
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>
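For readers skimming the commit description, here is a minimal, hypothetical sketch of what such a tracer could look like. The `GasTracer` name and the tracepoint names come from the commit message above; everything else (struct fields, the JSON-lines output format, and the serde/serde_json dependency) is assumed for illustration and is not the PR's actual implementation.

```rust
// Sketch of a gas/fuel tracer in the spirit of the commit description above.
// `GasTracer` and the tracepoint names come from the commit message; the
// record fields and JSON layout here are illustrative assumptions.
// Assumed dependencies: serde (with "derive") and serde_json.
use std::io::Write;
use std::time::Instant;

use serde::Serialize;

/// Points at which a trace record is emitted (names from the commit message).
#[derive(Serialize, Clone, Copy, Debug)]
enum TracePoint {
    PreSyscall,
    PostSyscall,
    PreExtern,
    PostExtern,
    EnterCall,
    ExitCall,
    Start,
    Finish,
}

/// One trace record: context, point of tracing, consumption stats, timing stats.
#[derive(Serialize, Debug)]
struct TraceRecord {
    context: String,
    point: TracePoint,
    gas_consumed: u64,
    fuel_consumed: u64,
    elapsed_ns: u64,
}

/// Accumulates records and writes them out as JSON lines (hypothetical API).
struct GasTracer {
    started: Instant,
    records: Vec<TraceRecord>,
}

impl GasTracer {
    fn new() -> Self {
        Self { started: Instant::now(), records: Vec::new() }
    }

    fn record(&mut self, context: &str, point: TracePoint, gas: u64, fuel: u64) {
        self.records.push(TraceRecord {
            context: context.to_string(),
            point,
            gas_consumed: gas,
            fuel_consumed: fuel,
            elapsed_ns: self.started.elapsed().as_nanos() as u64,
        });
    }

    /// Serialize all accumulated traces, one JSON object per line.
    fn flush<W: Write>(&self, mut out: W) -> std::io::Result<()> {
        for rec in &self.records {
            serde_json::to_writer(&mut out, rec)?;
            writeln!(out)?;
        }
        Ok(())
    }
}

fn main() -> std::io::Result<()> {
    let mut tracer = GasTracer::new();
    tracer.record("call-stack", TracePoint::Start, 0, 0);
    tracer.record("ipld::block_open", TracePoint::PreSyscall, 1_000, 500);
    tracer.record("ipld::block_open", TracePoint::PostSyscall, 1_200, 600);
    tracer.record("call-stack", TracePoint::Finish, 1_200, 600);
    // Printing to stdout here; writing to a file instead would match the
    // "Trace to file" commit.
    tracer.flush(std::io::stdout())
}
```

With the feature gate described above, this kind of tracing would presumably be compiled in only when building with `cargo build --features tracing`.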
force-pushed from 99d919c to 98b5623
Fixed
This tracing framework is useful for inspecting running logic in detail: how it interacts with the environment, debugging execution flow, understanding the instruction complexity of specific ranges, and getting a relative sense of where time is spent.
However, this should come with a massive warning sign: the timing numbers are considerably skewed by several factors:
- Non-negligible overhead of getting time from the system.
  - According to minstant benchmarks, `std::time::Instant::now()` can have an overhead of as much as 30ns on the specified platform.
  - Using the TSC is a solution, but even with minstant there's an overhead of 10ns which needs to be accounted for.
  - Potential solution: correct for the overhead by benchmarking the act of getting time, and applying a negative offset to all duration readings (see the sketch after this list).
- Additional instructions + cache thrashing due to the tracing logic, which is not observed in real executions.
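As a hedged illustration of the offset idea above (not code from this PR), the sketch below estimates the per-call cost of `std::time::Instant::now()` by timing a batch of back-to-back reads and then subtracts that estimate from raw duration readings. The sample count and the assumption that roughly one clock read's worth of overhead lands inside each measured interval are mine.

```rust
use std::time::{Duration, Instant};

/// Estimate the per-call overhead of reading the clock by timing a batch
/// of back-to-back `Instant::now()` calls.
fn clock_overhead(samples: u32) -> Duration {
    let start = Instant::now();
    for _ in 0..samples {
        // black_box keeps the clock read from being optimized away.
        std::hint::black_box(Instant::now());
    }
    start.elapsed() / samples
}

/// Apply the negative offset proposed above: a measured interval contains
/// roughly one full clock read (the tail of the starting `now()` plus the
/// head of the ending one), so subtract one overhead estimate from it.
fn corrected(raw: Duration, overhead: Duration) -> Duration {
    raw.saturating_sub(overhead)
}

fn main() {
    let overhead = clock_overhead(1_000_000);
    println!("estimated clock overhead: {:?} per call", overhead);

    // Example measurement of some traced region.
    let t0 = Instant::now();
    let mut acc = 0u64;
    for i in 0..10_000u64 {
        acc = acc.wrapping_add(i * i);
    }
    let raw = t0.elapsed();
    std::hint::black_box(acc);

    println!("raw: {:?}, corrected: {:?}", raw, corrected(raw, overhead));
}
```

Swapping in a TSC-based clock such as the minstant crate would follow the same pattern, just with a smaller (but still non-zero) offset to subtract.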
force-pushed from c667a48 to 7343111
force-pushed from 7343111 to 7730a4c
force-pushed from d953e55 to 22edabc
force-pushed from f9d5ea2 to 8ea11f0
force-pushed from 8ea11f0 to a5d4a38
force-pushed from 80c782e to 588abec
We will close this PR for now, but will keep the branch around as a reference for Wasm development in the FVM.