
Profiling, profile-guided optimization, and deoptimization #7235

Closed
DemiMarie opened this issue Jan 13, 2017 · 5 comments
@DemiMarie

This is a proposal for profile-guided optimization and deoptimization, much as is performed by the JVM. This will allow all of the following:

  • Devirtualization when (given the currently loaded classes) only one type could be the target of the method invocation.
  • Speculative branch analysis: If profiling data shows that a branch is always/never taken, it is possible to assume that the branch will continue to be always/never taken, and not generate code for the other path, instead generating a bailout.
  • Speculative devirtualization: if only one target has been observed so far for a virtual/interface call, the call site can be replaced with a type test followed by a direct call.
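As a rough illustration of the guard-plus-fallback pattern this produces, here is a source-level sketch. The `Shape`, `Circle`, and `Renderer` names are made up for the example, and the JIT of course rewrites native code rather than C# source; this only shows the shape of the transformation, not RyuJIT's actual output.

```csharp
using System;

// Hypothetical example types, used only to illustrate the transformation.
public abstract class Shape
{
    public abstract double Area();
}

public sealed class Circle : Shape
{
    public double Radius;
    public override double Area() => Math.PI * Radius * Radius;
}

public static class Renderer
{
    // What the programmer writes: an ordinary virtual call.
    public static double TotalArea(Shape[] shapes)
    {
        double sum = 0;
        foreach (var s in shapes)
            sum += s.Area();                 // virtual dispatch
        return sum;
    }

    // Roughly the code a speculative devirtualization could emit when
    // profiling has only ever seen Circle at this call site.
    public static double TotalAreaGuarded(Shape[] shapes)
    {
        double sum = 0;
        foreach (var s in shapes)
        {
            if (s.GetType() == typeof(Circle))
            {
                // Fast path: direct, inlinable call to Circle.Area().
                sum += ((Circle)s).Area();
            }
            else
            {
                // Bailout path: any other (possibly newly loaded) subtype
                // still goes through ordinary virtual dispatch.
                sum += s.Area();
            }
        }
        return sum;
    }
}
```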
@benaadams
Member

Speculative devirtualization:

Related issue https://github.com/dotnet/coreclr/issues/8819

@fiigii
Contributor

fiigii commented Feb 4, 2017

Good goal; devirtualization is very important for languages with dynamic dispatch. However, devirtualization does not always depend on PGO. As far as I know, there are two other approaches, neither of which requires changing the architecture of CoreCLR & RyuJIT.

  1. Static class hierarchy analysis:
    Class hierarchy analysis (CHA) can run fast enough to be adopted in a real-world compiler (RyuJIT), and its overhead varies with the precision chosen. Because CHA is a purely static analysis, it needs neither profiling feedback nor a multi-tiered JIT. That makes it a good choice for crossgen and the current RyuJIT.
  2. Control flow analysis:
    Control flow analysis (CFA), also called value-flow analysis, comes from the functional programming community. Compared to CHA, CFA algorithms (0-CFA, XTA, m-CFA, etc.) achieve much better precision, but they are all too slow to use in a JIT compiler. They are still worth considering for the crossgen AOT compiler.
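To make the CHA idea concrete, here is a managed-level sketch. The `ClassHierarchySketch` helper and `SingleKnownTarget` method are hypothetical names for illustration, and a real implementation lives inside the runtime, not in reflection over loaded assemblies; the point is only the query: one concrete implementation makes the call site a devirtualization candidate, and the answer must be re-checked when new classes load.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical sketch of the CHA query, not RyuJIT's implementation:
// enumerate currently loaded types and collect concrete implementations
// of a virtual/interface method.
static class ClassHierarchySketch
{
    public static MethodInfo SingleKnownTarget(MethodInfo declaration)
    {
        var parameterTypes = declaration.GetParameters()
                                        .Select(p => p.ParameterType)
                                        .ToArray();

        var targets = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(a => a.GetTypes())
            .Where(t => !t.IsAbstract && !t.IsInterface
                        && declaration.DeclaringType.IsAssignableFrom(t))
            .Select(t => t.GetMethod(declaration.Name, parameterTypes))
            .Where(m => m != null && !m.IsAbstract)
            .Distinct()
            .ToList();

        // Exactly one concrete target: a candidate for a direct call,
        // provided the result is re-validated (or guarded) on class load.
        return targets.Count == 1 ? targets[0] : null;
    }
}
```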

Reference
Dean, Jeffrey, David Grove, and Craig Chambers. "Optimization of Object-Oriented Programs Using Static Class Hierarchy Analysis." ECOOP 1995. Springer Berlin Heidelberg, 1995.
Tip, Frank, and Jens Palsberg. "Scalable Propagation-Based Call Graph Construction Algorithms." OOPSLA 2000 (ACM SIGPLAN Notices 35(10)). ACM, 2000.

@AndyAyersMS
Member

The challenge we face using any of these analyses is that they potentially become invalid once new classes are loaded. In general the optimizations they enable are speculative. Taking advantage of the information they provide would require work, possibly quite extensive work, in both JIT and VM.

Also note that there are important differences between Java/JVM and C#/CLR that can make it tricky to assess the benefit of particular classes of optimizations.
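One such difference, shown in a small hedged sketch with made-up types (`Meters`, `Sensor`, `FakeSensor`): in C#, methods are non-virtual unless marked `virtual`, and classes can be `sealed`, so many call sites are already direct or trivially devirtualizable without any speculation, whereas Java instance methods are virtual by default, which is where JVM-style speculative devirtualization earns most of its payoff.

```csharp
// Hypothetical types illustrating calls the CLR can handle with no speculation.
public sealed class Meters
{
    public double Value;

    // Non-virtual by default in C#: always a direct call.
    public double ToFeet() => Value * 3.28084;
}

public abstract class Sensor
{
    public abstract double Read();
}

public sealed class FakeSensor : Sensor
{
    public override double Read() => 42.0;
}

public static class Example
{
    public static double Use(FakeSensor s, Meters m)
    {
        // The static type is a sealed class, so Read() cannot be overridden
        // further; the JIT can devirtualize this call without any guard.
        double r = s.Read();

        // Plain non-virtual instance method: direct call, no dispatch at all.
        return r + m.ToFeet();
    }
}
```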

@msftgits msftgits transferred this issue from dotnet/coreclr Jan 31, 2020
@msftgits msftgits added this to the Future milestone Jan 31, 2020
@AndyAyersMS AndyAyersMS mentioned this issue Oct 19, 2020
@trylek
Member

trylek commented Apr 4, 2022

@AndyAyersMS / @davidwrighton - is this issue still relevant or are we tracking it elsewhere?

@AndyAyersMS
Member

Most of this is now implemented, though we don't fully support partial compilation (we can only do it when not optimizing, and it's not really driven by profile data), and it still isn't enabled by default.

So I think we can close this.

@ghost ghost locked as resolved and limited conversation to collaborators May 7, 2023