
Optimize Tracing Implementation for Lower Runtime Overhead #25

Closed
AbdelStark opened this issue Oct 26, 2023 · 3 comments

AbdelStark (Collaborator) commented Oct 26, 2023

Summary

The implementation proposed in #23 checks at runtime, for every instruction, whether tracing is enabled. This issue discusses ways to eliminate that overhead.

Problem

The existing tracing method checks at runtime whether the tracing feature is enabled, which incurs a small performance cost on every instruction the VM executes.

Proposed Solutions

1. Use Inline Function with Comptime Check

  • What: Create an inline function with a comptime check to see if tracing is activated.
  • Pros: No runtime overhead as the check and branch won't be generated if tracing is disabled. Also, this is a good opportunity to explore Zig's comptime feature.
  • Cons: The tracing flag is currently set at runtime via a CLI argument, so it is not known at compile time.
  • Mitigation: Return an error if the CLI argument requests tracing but the VM was not compiled with tracing enabled.
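A minimal sketch of option 1 in Zig. The names here are hypothetical: in practice `trace_enabled` would come from `@import("build_options")`, wired up via something like `-Dtrace=true` in `build.zig`; the signature of `traceStep` is illustrative only.

```zig
const std = @import("std");

// Hypothetical compile-time flag; would normally be a build option
// exposed through @import("build_options").
const trace_enabled = false;

/// Inline function with a comptime check: when `trace_enabled` is
/// false, the branch is resolved at compile time and no tracing
/// code (not even the check) is emitted into the instruction loop.
inline fn traceStep(writer: anytype, pc: u64, ap: u64, fp: u64) !void {
    if (comptime trace_enabled) {
        try writer.print("pc={d} ap={d} fp={d}\n", .{ pc, ap, fp });
    }
}
```

Because the check happens at comptime, a tracing-disabled build pays zero cost per instruction, which is exactly the overhead this issue is about.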

2. Allocate Tracing Scratch Memory Upfront

  • What: Pre-allocate some memory specifically for tracing.
  • Pros: Amortizes the cost of growing the trace buffer. Reduces array-resizing overhead (ArrayList doubles its capacity each time it fills up).
  • Cons: Allocates memory that might go unused for small programs.
  • Mitigation: Future versions could make this configurable for different scenarios.
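Option 2 could look roughly like the following sketch, assuming a per-step trace entry type (`TraceEntry` and `expected_steps` are hypothetical names, not from the codebase):

```zig
const std = @import("std");

/// Hypothetical trace record captured for each executed instruction.
const TraceEntry = struct { pc: u64, ap: u64, fp: u64 };

/// Pre-allocate the trace buffer once, so appends during execution
/// do not trigger repeated grow-and-copy cycles.
fn initTrace(
    allocator: std.mem.Allocator,
    expected_steps: usize,
) !std.ArrayList(TraceEntry) {
    var trace = std.ArrayList(TraceEntry).init(allocator);
    try trace.ensureTotalCapacity(expected_steps);
    return trace;
}
```

With capacity reserved up front, the hot loop can use `appendAssumeCapacity` for traces that fit the estimate, at the cost of memory that small programs may never use.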

TL;DR

  • Use an inline function with a comptime check for tracing to eliminate runtime overhead.
  • Pre-allocate tracing scratch memory to amortize memory-growing costs.

Questions

  • What do you think about these proposed solutions?
  • Are there other trade-offs or mitigations to consider?

Next Steps

  • These optimizations can be made in two follow-up PRs.
  • We can merge the current PR without these optimizations.
nils-mathieu (Collaborator) commented Oct 26, 2023

Are there any benchmarks I could run to check for regressions caused by checking the tracing flag on each instruction? I'd love to work on that.

Also, another way to do it (though I don't know how fast it would be, due to cache misses) would be to dispatch a function dynamically: if tracing is enabled, the function does its job without checking whether tracing is enabled; otherwise it does nothing, and we only pay the overhead of the function call.

I think this can be mitigated if the function is hot and the CPU can keep it in cache long enough.
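The dynamic-dispatch idea could be sketched in Zig as follows (function names and signatures are hypothetical; the pointer would be selected once at startup from the CLI flag):

```zig
const std = @import("std");

const TraceFn = *const fn (pc: u64, ap: u64, fp: u64) void;

fn traceActive(pc: u64, ap: u64, fp: u64) void {
    std.debug.print("pc={d} ap={d} fp={d}\n", .{ pc, ap, fp });
}

fn traceNoop(pc: u64, ap: u64, fp: u64) void {
    _ = pc;
    _ = ap;
    _ = fp;
}

// Chosen once when parsing CLI arguments; the instruction loop then
// calls `trace_fn` unconditionally, paying only an indirect call.
var trace_fn: TraceFn = traceNoop;
```

This trades the per-instruction branch for an indirect call, which, as noted above, should stay cheap as long as the target remains hot in cache (and a predictable call target is friendly to the branch predictor).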


There hasn't been any activity on this issue recently, and in order to prioritize active issues, it will be marked as stale.
Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by leaving a 👍
Because this issue is marked as stale, it will be closed and locked in 7 days if no further activity occurs.
Thank you for your contributions!

github-actions bot added the stale label Jan 26, 2024
StringNick (Collaborator) commented:

done through #514
