
[Optimization] Equivalent functions are not merged because their panic information differs. #80021

Closed
LunarLambda opened this issue Dec 14, 2020 · 4 comments
Labels
I-heavy Issue: Problems and improvements with respect to binary size of generated code.

Comments

@LunarLambda

I was playing around in the Compiler Explorer with APIs using AsRef<T> to provide more ergonomic interfaces.
My example used AsRef<[u8]> to allow an API to accept string literals, byte arrays, byte strings, etc.

I wanted to know if there was an elegant way to avoid monomorphization overhead for large functions that only differ by an AsRef::as_ref call at the very top, or if the compiler could do this for me in optimized builds.

My tests can be found here:

https://godbolt.org/z/5MzKMT
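The godbolt code itself is not quoted in the issue; a minimal sketch of the shape being compared (function names from the issue, bodies hypothetical) might look like:

```rust
// Hypothetical reconstruction: both functions compile to identical machine
// code, but each carries its own panic metadata (the file/line of the
// potential out-of-bounds panic), which blocks merging them.
fn asref_direct(data: &[u8]) -> usize {
    data[0] as usize + data.len() // indexing can panic on an empty slice
}

fn asref_inner<T: AsRef<[u8]>>(data: T) -> usize {
    let data = data.as_ref();
    data[0] as usize + data.len() // same body, different panic location
}

fn main() {
    // any AsRef<[u8]> argument works: &str, byte string, array, Vec, ...
    assert_eq!(asref_direct(b"hi"), asref_inner("hi"));
}
```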

However, far more interestingly, I saw that asref_inner and asref_direct were not merged by the compiler in an optimized build, despite being 100% identical code.

The only difference is that they carry different source-location info for a potential panic.
Perhaps this is a contrived case, since fully equivalent functions are probably rare in real code,
but I still thought it was worth bringing up.
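As an aside on the monomorphization half of the question: a common manual workaround (not from the issue, just the usual pattern) is to keep the generic surface thin and funnel into a single non-generic inner function, so only the cheap AsRef conversion is duplicated per type:

```rust
// Sketch of the thin-shim pattern, using a hypothetical `process` API:
// the generic outer function only performs the AsRef conversion, while
// the large body exists exactly once in the non-generic inner function.
fn process(data: impl AsRef<[u8]>) -> usize {
    fn inner(data: &[u8]) -> usize {
        // the large, expensive-to-duplicate body lives here only once
        data.iter().map(|&b| b as usize).sum()
    }
    inner(data.as_ref())
}

fn main() {
    // &str, byte string, and byte array all funnel into the same inner body
    assert_eq!(process("abc"), process(b"abc"));
    assert_eq!(process([97u8, 98, 99]), 294);
}
```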

Perhaps the functions could be merged if the panic information was smuggled through a register or the stack at the call site instead?

@camelid camelid added A-mir-opt Area: MIR optimizations I-heavy Issue: Problems and improvements with respect to binary size of generated code. labels Dec 14, 2020
@nagisa
Member

nagisa commented Dec 14, 2020

Extremely unlikely to be actionable from the standpoint of code optimizations. If there's any change made to improve this particular example, it'd have to be around how much panic information we preserve (or scrub), potentially exposed via some compiler flag.

(Changes to how panic information is passed have been rejected multiple times in the past, because the current scheme is optimised for the amount of code generated, and regressions there are undesirable.)
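For context on how panic location information flows today: with the #[track_caller] attribute, the compiler passes the caller's source location as an implicit extra argument. A small sketch using the standard library API (the function name here is hypothetical):

```rust
use std::panic::Location;

// #[track_caller] makes the compiler thread the caller's file/line/column
// through as a hidden argument; location data of this kind is what
// distinguishes the otherwise-identical functions in the issue.
#[track_caller]
fn where_am_i() -> &'static Location<'static> {
    Location::caller()
}

fn main() {
    let loc = where_am_i();
    println!("called from {}:{}", loc.file(), loc.line());
}
```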

Removing A-mir-opt for now, but also inclined to just close this issue.

@nagisa nagisa removed the A-mir-opt Area: MIR optimizations label Dec 14, 2020
@jyn514
Member

jyn514 commented Dec 15, 2020

I wanted to know if there was an elegant way to avoid monomorphization overhead for large functions that only differ by an AsRef::as_ref call at the very top, or if the compiler could do this for me in optimized builds.

This is #77960.

@LunarLambda
Author

Extremely unlikely to be actionable from the standpoint of code optimizations. If there's any change made to improve this particular example, it'd have to be around how much panic information we preserve (or scrub), potentially exposed via some compiler flag.

(Changes to how panic information is passed have been rejected multiple times in the past, because the current scheme is optimised for the amount of code generated, and regressions there are undesirable.)

Removing A-mir-opt for now, but also inclined to just close this issue.

I see

I wanted to know if there was an elegant way to avoid monomorphization overhead for large functions that only differ by an AsRef::as_ref call at the very top, or if the compiler could do this for me in optimized builds.

This is #77960.

Thank you!

@Enselic
Member

Enselic commented Dec 16, 2023

Triage: This was proposed to be closed as "not feasible". Let's close it now.

@Enselic Enselic closed this as not planned Dec 16, 2023