Using a const fn constructor instead of struct expressions with a large number of constants causes compilation to fail because of a large memory allocation. #68957
Comments
I know 42.000+ constants is probably not the best idea, but with the struct expression case finishing within 5s I would expect the compiler to be able to handle them, as the const constructor does exactly the same thing.
The hot code here is …
@rustbot claim
Yes, I can confirm that on the current nightly, memory usage by rustc is way lower (no longer claiming multiple GBs of RAM), so it no longer runs out of memory and compilation succeeds at the speeds you mention. I also tried with all named Unicode code points included (including names like …). Thanks for taking the time to make Rust work with this admittedly non-standard code.
@CDirkx if the issue has been resolved to your satisfaction, feel free to close it. #68837 was the PR that made the big difference here. There's no rush, however. I have another fix in mind in addition to #69072 to make compile time linear/linearithmic in the number of associated items, so your compile times should continue to improve regardless of the status of this issue. I'm also looking to add a variation of your MCVE to the rustc-perf benchmarks so we don't regress this in the future. Thank you for posting this in the first place. It made finding these rough spots very easy!
O(log n) lookup of associated items by name

Resolves #68957, in which compile time is quadratic in the number of associated items. This PR makes name lookup use binary search instead of a linear scan to improve its asymptotic performance. As a result, the pathological case from that issue now runs in 8 seconds on my local machine, as opposed to many minutes on the current stable.

Currently, method resolution must do a linear scan through all associated items of a type to find one with a certain name. This PR changes the result of the `associated_items` query to a data structure that preserves the definition order of associated items (which is used, e.g., for the layout of trait object vtables) while adding an index of those items sorted by (unhygienic) name. When doing name lookup, we first find all items with the same `Symbol` using binary search, then run hygienic comparison to find the one we are looking for. Ideally, this would be implemented using an insertion-order-preserving, hash-based multi-map, but one is not readily available.

Someone who is more familiar with identifier hygiene could probably make this better by auditing the uses of the `AssociatedItems` interface. My goal was to preserve the current behavior exactly, even if it seemed strange (I left at least one FIXME to this effect). For example, some places use comparison with `ident.modern()` and some places use `tcx.hygienic_eq`, which requires the `DefId` of the containing `impl`. I don't know whether those approaches are equivalent or which one should be preferred.
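To make the mechanism concrete, here is a minimal, self-contained sketch of the idea. This is illustrative only, not the actual rustc `AssociatedItems` type or `associated_items` query result: items are kept in definition order, with a separate index of positions sorted by name, so lookup by name becomes a binary search plus a short scan over the group of same-named items.

```rust
// Illustrative sketch only -- not the actual rustc data structure.
// Items are stored in definition order (needed e.g. for vtable layout),
// plus an index of positions sorted by name so that lookup by name is a
// binary search instead of a linear scan over all associated items.

#[derive(Debug)]
struct AssocItem {
    name: String,
    // kind, DefId, hygiene information, etc. are omitted here.
}

struct AssociatedItems {
    in_definition_order: Vec<AssocItem>,
    // Indices into `in_definition_order`, sorted by the items' names.
    sorted_by_name: Vec<usize>,
}

impl AssociatedItems {
    fn new(items: Vec<AssocItem>) -> Self {
        let mut sorted_by_name: Vec<usize> = (0..items.len()).collect();
        sorted_by_name.sort_by(|&a, &b| items[a].name.cmp(&items[b].name));
        AssociatedItems { in_definition_order: items, sorted_by_name }
    }

    /// O(log n) name lookup: binary-search to the first index whose item has
    /// the requested name, then yield the run of items sharing that name.
    /// (In the real compiler, a hygienic comparison then picks the right one.)
    fn filter_by_name<'a>(&'a self, name: &'a str) -> impl Iterator<Item = &'a AssocItem> + 'a {
        let start = self
            .sorted_by_name
            .partition_point(|&i| self.in_definition_order[i].name.as_str() < name);
        self.sorted_by_name[start..]
            .iter()
            .map(move |&i| &self.in_definition_order[i])
            .take_while(move |item| item.name == name)
    }
}

fn main() {
    let items = AssociatedItems::new(vec![
        AssocItem { name: "value".into() },
        AssocItem { name: "from".into() },
        AssocItem { name: "new".into() },
    ]);
    for item in items.filter_by_name("from") {
        println!("found {:?}", item);
    }
}
```

Because definition order and the sorted index live side by side, existing consumers of the in-order view are unaffected while name-based method resolution gets the logarithmic lookup.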
I have:

- a struct `CodePoint` with a `const fn from(...) -> Self` constructor
- `CodePoint` constants (42.000+) defined using the constructor

Building this leads to:

`memory allocation of 1207959552 bytes failed`

even though I have 12GB of unused RAM.

Using struct expressions instead of the `const fn` constructor compiles fine.

Tested on `nightly` and `stable`, with `include!(...)` and copy-pasting: no differences.

Source file to reproduce: reproduce.zip
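For illustration, the pattern looks roughly like the sketch below. The field name, the parameter of `from`, and the constant names are assumptions on my part; the real source (with 42.000+ constants) is in the attached reproduce.zip.

```rust
// Hypothetical reduction of the reported pattern. The field `value`, the
// parameter of `from`, and the constant names are illustrative assumptions;
// the real reproduction defines 42.000+ such constants.
struct CodePoint {
    value: u32,
}

impl CodePoint {
    const fn from(value: u32) -> Self {
        CodePoint { value }
    }
}

// const-fn constructor form: repeated tens of thousands of times, this is
// the variant that failed with the large memory allocation.
const LATIN_CAPITAL_LETTER_A: CodePoint = CodePoint::from(0x0041);

// Equivalent struct-expression form: the same constants compiled quickly.
const LATIN_CAPITAL_LETTER_A_ALT: CodePoint = CodePoint { value: 0x0041 };

fn main() {
    let _ = (LATIN_CAPITAL_LETTER_A.value, LATIN_CAPITAL_LETTER_A_ALT.value);
}
```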