Recent 12% perf regression in unused-warnings #52092
Comments
The wall-time measurements look a bit more severe :( also cc @nnethercote
perf.rust-lang.org now clearly points to PR #51895:
Visiting for triage. Assigning to @nikomatsakis for now, under the assumption that he is the best person to investigate this in the very short term.
Also making P-high for now. (I'm not 100% clear on how important the efficiency of this particular benchmark is, but it is probably worth treating it as high until someone can provide an argument for deprioritization.)
I can investigate, but I'm not all that worried about this particular benchmark. It'd be good to do a quick profile, though. I can't really imagine how this would make things slower apart from the fact that I did add a bit of data to metadata -- so maybe it's just the extra encoding costs?
I've not done any investigation this week. I lost access to the Linux machine I use for profiling, so it will have to wait until two weeks from now when I'm back home. (Unless someone else wants to take a look.)
Visiting for triage; assigning to self in hopes I find time to look at this during next week.
Unassigning self since I will be absent for three weeks after end of day tomorrow.
I had looked at this: the relevant test file mainly consists of 20,000 similar items, and the regression is at least 90% attributable to the addition of 2×20,000 query calls during processing.
This does feel like an edge case to me: in most cases I can think of, the query overhead of actually collecting an item is just a few microseconds per item (about 1–2μs per query cache miss, including all the associated work), which is dwarfed by the collection itself. However, if we want to just "get rid of the regression", it might be a good idea to add a bypass for this path. And of course it's always a good idea to speed up queries.
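To illustrate the analysis above, here is a minimal, hypothetical sketch of a memoized query cache. It is not rustc's actual query system (all names here are invented for illustration); it only shows why issuing two extra query calls per item, at roughly 1–2μs per cache miss plus a hash lookup per hit, adds up to measurable wall time across 20,000 items:

```rust
use std::collections::HashMap;

// Illustrative stand-in for a query cache; `is_item_used` and
// `compute_is_used` are hypothetical names, not rustc internals.
struct QueryCache {
    cache: HashMap<u32, bool>,
    misses: usize,
}

impl QueryCache {
    fn new() -> Self {
        QueryCache { cache: HashMap::new(), misses: 0 }
    }

    // Each miss pays the provider cost plus a cache insertion; each hit
    // still pays a hash lookup. With tens of thousands of items, both
    // paths contribute to the regression.
    fn is_item_used(&mut self, item_id: u32) -> bool {
        if let Some(&cached) = self.cache.get(&item_id) {
            return cached;
        }
        self.misses += 1;
        let result = compute_is_used(item_id); // the "provider"
        self.cache.insert(item_id, result);
        result
    }
}

// Stand-in for the real computation behind the query.
fn compute_is_used(item_id: u32) -> bool {
    item_id % 2 == 0
}

fn main() {
    let mut queries = QueryCache::new();
    let n = 20_000;
    let mut used = 0;
    // Simulate the pattern from the benchmark: two query calls per item.
    for id in 0..n {
        if queries.is_item_used(id) {
            used += 1;
        }
        let _ = queries.is_item_used(id); // second call hits the cache
    }
    println!("{} used, {} cache misses", used, queries.misses);
}
```

The "bypass" idea amounts to skipping the cache machinery entirely on this hot path and calling the provider (or a precomputed table) directly, trading query-system bookkeeping for a plain function call.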
I would lower this issue to P-medium unless someone actually cares about this case; this code is going to get refactored to death anyway.
Downgrading to P-medium on the basis of the analysis and arguments presented by @arielb1 above.
Given it's been 5(!) years, I don't think there's value in keeping this open -- at best, this is tracking "can we make unused-warnings faster?", which doesn't feel like a useful thing to track.
A recent perf regression showed up somewhere in this range. I don't think perf has enough data yet to narrow it down to a particular PR, but of the PRs in that list, #51895 looks the most suspect.
@nikomatsakis or @tmandry y'all may be interested in this!