
Experimental: Add Derive Proc-Macro Caching #129102

Open

futile wants to merge 14 commits into master from experimental/proc-macro-caching
Conversation

futile
Contributor

@futile futile commented Aug 14, 2024

# On-Disk Caching For Derive Proc-Macro Invocations

This PR adds on-disk caching for derive proc-macro invocations using rustc's query system to speed up incremental compilation.

The implementation is (intentionally) a bit rough/incomplete, as I wanted to see whether this helps with performance before fully implementing it/RFCing etc.
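As a rough standalone sketch of the idea (this is not rustc's actual query machinery; all names below are made up for illustration), a derive expansion can be modeled as a pure function of its input tokens and memoized by a content hash of that input:

```rust
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical stand-in for a derive proc-macro: a pure function
// from input tokens (modeled as a string) to output tokens.
fn expand_derive(input: &str) -> String {
    format!("impl Nothing for {input} {{}}")
}

// Content-hash the input; identical token streams yield identical keys.
fn cache_key(input: &str) -> u64 {
    let mut h = DefaultHasher::new();
    input.hash(&mut h);
    h.finish()
}

struct ExpansionCache {
    map: HashMap<u64, String>,
    misses: usize, // number of actual macro executions
}

impl ExpansionCache {
    fn new() -> Self {
        ExpansionCache { map: HashMap::new(), misses: 0 }
    }

    // Run the expansion only on a cache miss; on a hit, replay the
    // stored result without executing the "macro" again.
    fn expand(&mut self, input: &str) -> String {
        let key = cache_key(input);
        if let Some(out) = self.map.get(&key) {
            return out.clone();
        }
        self.misses += 1;
        let out = expand_derive(input);
        self.map.insert(key, out.clone());
        out
    }
}

fn main() {
    let mut cache = ExpansionCache::new();
    let a = cache.expand("struct Foo;");
    let b = cache.expand("struct Foo;"); // unchanged input: served from cache
    assert_eq!(a, b);
    assert_eq!(cache.misses, 1);
    cache.expand("struct Bar;"); // changed input: re-executed
    assert_eq!(cache.misses, 2);
}
```

On a recompile with unchanged input tokens, the stored result is replayed instead of running the macro; rustc's query system additionally persists such results to disk between compilation sessions, which is what this PR wires up.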

I did some ad-hoc performance testing.

## Rough, Preliminary Eval Results

Using a version built through `DEPLOY=1 src/ci/docker/run.sh dist-x86_64-linux` (which I got from the rustc-dev-guide: https://rustc-dev-guide.rust-lang.org/building/optimized-build.html#profile-guided-optimization).

### Some Small Personal Project (https://github.com/futile/ultra-game):

```console
# with -Zthreads=0 as well
$ touch src/main.rs && cargo +dist check
```

Caused a re-check of 1 crate (the only one).

Result:
| Configuration | Time (avg. ~5 runs) |
|--------|--------|
| Uncached | ~0.54s |
| Cached | ~0.54s |

No visible difference.

### Bevy (https://github.com/bevyengine/bevy):

```console
$ touch crates/bevy_ecs/src/lib.rs && cargo +dist check
```

Caused a re-check of 29 crates.

Result:
| Configuration | Time (avg. ~5 runs) |
|--------|--------|
| Uncached | ~6.4s |
| Cached | ~5.3s |

Roughly 1s, or ~17% speedup.

### Polkadot-Sdk (https://github.com/paritytech/polkadot-sdk):

Basically this script (not mine): https://github.com/coderemotedotdev/rustc-profiles/blob/d61ad38c496459d82e35d8bdb0a154fbb83de903/scripts/benchmark_incremental_builds_polkadot_sdk.sh

TL;DR: Two full `cargo check` runs to fill the incremental caches (for cached & uncached). Then 10 repetitions of `touch $some_file && cargo +uncached check && cargo +cached check`.

```console
$ cargo update # `time` didn't build because compiler too new/dep too old
$ ./benchmark_incremental_builds_polkadot_sdk.sh # see above
```

*Huge* workspace with ~190 crates. Not sure how many were re-built/re-checked on each invocation.

Result:
| Configuration | Time (avg. 10 runs) |
|--------|--------|
| Uncached | 99.4s |
| Cached | 67.5s |

Very visible speedup of 31.9s or ~32%.


**-> Based on these results I think it makes sense to do a rustc-perf run and see what that reports.**


## Current Limitations/TODOs

I left some `FIXME(pr-time)`s in the code for things I wanted to bring up/draw attention to in this PR, usually where I wasn't sure whether I had found a (good) solution, or where I knew there might be a better way to do something; see the diff for these.

### High-Level Overview of What's Missing For "Real" Usage:

* [ ] Add caching for `Bang`- and `Attr`-proc macros (currently only `Derive`).
  * Not a big change; I just focused on derive proc-macros for now, since I felt these should be the most cacheable and are used very often in practice.
* [ ] Allow marking specific macros as "do not cache" (currently only all-or-nothing).
  * Extend the unstable option to support, e.g., `-Z cache-derive-macros=some_pm_crate::some_derive_macro_fn` for easy testing using the nightly compiler.
  * After testing: Add a `#[proc_macro_cacheable]` annotation to allow proc-macro authors to opt in to caching (or something similar). Would probably need an RFC?
  * Might make sense to combine this with the tracking issue for `proc_macro::{tracked_env, tracked_path}` (#99515), so that external dependencies can be picked up and taken into account as well (would maybe need a 2-tiered query for that, so we can save its dependencies after running the proc macro).
* [ ] How to deal with (currently) un-hashable `TokenStream`s?

So, just since you were in the loop on the attempt to cache declarative macro expansions:

r? @petrochenkov

Please feel free to re-/unassign!

Finally: I hope this isn't too big a PR. I'll also show up on Zulip, since I read that that is usually appreciated. Thanks a lot for taking a look! :)

(Kind of related/very similar approach, old declarative macro caching PR: #128747)

@rustbot rustbot added A-query-system Area: The rustc query system (https://rustc-dev-guide.rust-lang.org/query.html) S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. labels Aug 14, 2024
@Kobzol
Contributor

Kobzol commented Aug 14, 2024

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Aug 14, 2024
bors added a commit to rust-lang-ci/rust that referenced this pull request Aug 14, 2024
…ng, r=<try>

Experimental: Add Derive Proc-Macro Caching

@bors
Contributor

bors commented Aug 14, 2024

⌛ Trying commit d47fa70 with merge 6d8226b...

@bors
Contributor

bors commented Aug 14, 2024

☀️ Try build successful - checks-actions
Build commit: 6d8226b (6d8226bea18aa0e8df52c967e24efd7c5ee92169)


@rust-timer
Collaborator

Finished benchmarking commit (6d8226b): comparison URL.

Overall result: ❌✅ regressions and improvements - ACTION NEEDED

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is a highly reliable metric that was used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.6% | [0.6%, 0.6%] | 1 |
| Regressions ❌ (secondary) | 0.2% | [0.2%, 0.2%] | 1 |
| Improvements ✅ (primary) | -2.8% | [-7.9%, -1.3%] | 17 |
| Improvements ✅ (secondary) | -0.2% | [-0.2%, -0.2%] | 1 |
| All ❌✅ (primary) | -2.6% | [-7.9%, 0.6%] | 18 |

Max RSS (memory usage)

Results (primary -0.3%, secondary -1.8%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 1.4% | [1.4%, 1.4%] | 1 |
| Regressions ❌ (secondary) | 3.2% | [3.2%, 3.2%] | 1 |
| Improvements ✅ (primary) | -1.1% | [-1.2%, -1.0%] | 2 |
| Improvements ✅ (secondary) | -2.5% | [-5.3%, -0.7%] | 7 |
| All ❌✅ (primary) | -0.3% | [-1.2%, 1.4%] | 3 |

Cycles

Results (primary -3.9%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -3.9% | [-10.7%, -1.6%] | 16 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | -3.9% | [-10.7%, -1.6%] | 16 |

Binary size

Results (primary -0.1%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.1% | [-0.1%, -0.1%] | 4 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | -0.1% | [-0.1%, -0.1%] | 4 |

Bootstrap: 753.65s -> 756.621s (0.39%)
Artifact size: 341.43 MiB -> 341.67 MiB (0.07%)

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels Aug 15, 2024
@futile
Contributor Author

futile commented Aug 16, 2024

Ok, I think the performance results look pretty great!

Here are the results for instructions:

Instructions

Summary

| | Range | Mean | Count |
|---|---|---|---|
| Regressions | 0.20%, 0.60% | 0.40% | 2 |
| Improvements | -7.87%, -0.24% | -2.68% | 18 |
| All | -7.87%, 0.60% | -2.37% | 20 |

Primary benchmarks

| Benchmark | Profile | Scenario | % Change | Significance Factor |
|---|---|---|---|---|
| cargo-0.60.0 | check | incr-unchanged | -7.87% | 86.97x |
| cargo-0.60.0 | check | incr-patched: println | -7.04% | 97.31x |
| cargo-0.60.0 | debug | incr-unchanged | -3.95% | 26.54x |
| cargo-0.60.0 | opt | incr-unchanged | -3.76% | 31.39x |
| cargo-0.60.0 | debug | incr-patched: println | -3.62% | 38.61x |
| diesel-1.4.8 | check | incr-unchanged | -2.36% | 34.49x |
| diesel-1.4.8 | check | incr-patched: println | -2.29% | 24.18x |
| webrender-2022 | check | incr-unchanged | -2.24% | 21.62x |
| webrender-2022 | check | incr-patched: println | -2.07% | 19.32x |
| diesel-1.4.8 | debug | incr-unchanged | -1.98% | 27.36x |
| diesel-1.4.8 | debug | incr-patched: println | -1.94% | 19.31x |
| diesel-1.4.8 | opt | incr-unchanged | -1.83% | 19.58x |
| diesel-1.4.8 | opt | incr-patched: println | -1.76% | 15.07x |
| webrender-2022 | debug | incr-unchanged | -1.44% | 10.68x |
| webrender-2022 | debug | incr-patched: println | -1.36% | 12.79x |
| webrender-2022 | opt | incr-unchanged | -1.29% | 11.78x |
| cargo-0.60.0 | opt | incr-patched: println | -1.27% | 6.98x |
| cargo-0.60.0 | check | incr-full | 0.60% | 4.13x |

Secondary benchmarks

| Benchmark | Profile | Scenario | % Change | Significance Factor |
|---|---|---|---|---|
| tt-muncher | check | incr-full | -0.24% | 4.14x |
| tt-muncher | doc | full | 0.20% | 4.12x |

Conclusion

So overall, lots of incr-unchanged and incr-patched: println scenarios saw noticeable improvements (across check, debug and opt), which is similar to what my own rough eval showed. At the same time, only cargo's check saw a regression, of 0.60%, for incr-full. That is really nice, because incr-full contains the overhead of caching, but not yet any speedup. So seeing only one small regression for incr-full means that incr-full isn't slowed down noticeably for most of the benchmarks.

The secondary benchmark results look weird: incr-full got faster, even though there should be some overhead, and the doc full build got slower, even though non-incremental builds shouldn't be affected. Not really sure what's going on here.

Also, I think the benchmarks don't contain many "big" projects (like bevy etc.), which might see the biggest speedup from this caching. cargo looks like one of the bigger benchmarks to me, and its check wall-time went from 1.18s -> 1.06s, so it wasn't even high to begin with (although I have no idea how beefy the machine is!). But that probably just means that the benefits for large projects might be even nicer :)

Next Steps

Given these results, what's the opinion here? I would think it makes sense to review + clean up the implementation, extend it to all proc-macro types, and to add an unstable option that allows not caching certain proc-macros (e.g., -Zcache-proc-macros-skip=...). Then a next step could be getting this into nightly, and letting people try it out (it would be off by default, of course)?

Then after that, as mentioned before, it might need a new attribute like #[proc_macro_cacheable] to let proc-macro authors opt-into the caching (since we don't want to change the default case). Would this require an RFC for the new attribute at that point?

Anyway, it would be awesome if someone could take a look at/review the diff so far, and to also get some feedback on possible next steps. Thanks a lot already! :)

P.S.: Big kudos to this blog post which found these possible gains in the first place! Also to @SparrowLii, whose initial implementation for declarative macro caching provided me the much-needed basic structure for this implementation as well!

@memoryruins
Contributor

@futile

> Then after that, as mentioned before, it might need a new attribute like #[proc_macro_cacheable] to let proc-macro authors opt-into the caching (since we don't want to change the default case). Would this require an RFC for the new attribute at that point?

As far as I've seen, the default case has been to assume that caching can happen (and cases such as stateful proc-macros have not been supported), even if some proc-macros haven't treated it this way. Example issues of this are #44034 (comment) and #63804 (comment), and #44034 (comment) (and the following comments) notes the IDE case, where language servers such as rust-analyzer and RustRover may cache proc-macro expansions already.

Not a reference, but another example of how an IDE has treated them https://blog.jetbrains.com/rust/2022/12/05/what-every-rust-developer-should-know-about-macro-support-in-ides/#what-every-rust-macro-implementor-should-take-into-account

> If you still want to write a procedural macro, avoid stateful computations and I/O. It's never guaranteed when and how many times macro expansion is going to be invoked.

I'm not arguing for or against a flag/attribute, nor am I a reviewer, but saw this wonderful PR and thought to add some context I've seen to help make a decision. It's possible there have been further discussions elsewhere that I have missed.

Thank you for exploring an implementation of caching proc-macro expansions!

@futile
Contributor Author

futile commented Aug 16, 2024

> As far as I've seen, the default case has been to assume that caching can happen (and cases such as stateful proc-macros have not been supported), even if some proc-macros haven't treated it this way. Example issues of this are #44034 (comment) and #63804 (comment), and #44034 (comment) (and the following comments) notes the IDE case, where language servers such as rust-analyzer and RustRover may cache proc-macro expansions already.

> Not a reference, but another example of how an IDE has treated them: https://blog.jetbrains.com/rust/2022/12/05/what-every-rust-developer-should-know-about-macro-support-in-ides/#what-every-rust-macro-implementor-should-take-into-account

Thank you for bringing that up, I wasn't aware of it; it definitely helps! :) The reason I didn't want to just "break" existing libraries is to not make life unnecessarily harder for libraries like sqlx, which rely on this behavior to a certain extent. I think it makes sense to coordinate with #99515, which would give proc-macro authors something clear to rely on, and would at the same time help reduce the "fallout" from starting to cache proc-macro invocations.

But this probably means that a new opt-in attribute/etc. will not be necessary, so that's really nice, thanks! :)

> I'm not arguing for or against a flag/attribute, nor am I a reviewer, but saw this wonderful PR and thought to add some context I've seen to help make a decision. It's possible there have been further discussions elsewhere that I have missed.

> Thank you for exploring an implementation of caching proc-macro expansions!

Thanks, very appreciated! :)

@futile futile force-pushed the experimental/proc-macro-caching branch 2 times, most recently from 64058f4 to abb1ba2 Compare August 19, 2024 15:09
@futile
Contributor Author

futile commented Aug 19, 2024

Updated with some cleanup. Got rid of some unnecessary code, and split out a small change to error handling in the query system into #129271.

Otherwise same as before, ready for review :)

matthiaskrgr added a commit to matthiaskrgr/rust that referenced this pull request Aug 19, 2024
…-panic, r=michaelwoerister

Prevent double panic in query system, improve diagnostics

I stumbled upon a double-panic in the query system while working on something else (rust-lang#129102), which hid the real error cause for what I was debugging. This PR remedies that, so unwinding should be able to present more errors. It shouldn't really be relevant for code that doesn't ICE.
rust-timer added a commit to rust-lang-ci/rust that referenced this pull request Aug 19, 2024
Rollup merge of rust-lang#129271 - futile:query-system/prevent-double-panic, r=michaelwoerister

Prevent double panic in query system, improve diagnostics

I stumbled upon a double-panic in the query system while working on something else (rust-lang#129102), which hid the real error cause for what I was debugging. This PR remedies that, so unwinding should be able to present more errors. It shouldn't really be relevant for code that doesn't ICE.
@petrochenkov petrochenkov marked this pull request as ready for review August 20, 2024 17:04
@rustbot
Collaborator

rustbot commented Aug 20, 2024

These commits modify the Cargo.lock file. Unintentional changes to Cargo.lock can be introduced when switching branches and rebasing PRs.

If this was unintentional then you should revert the changes before this PR is merged.
Otherwise, you can ignore this comment.

```rust
// This test tests that derive-macro execution is cached.
// HOWEVER, this test can currently only be checked manually,
// by running it (through compiletest) with `-- --nocapture --verbose`.
// The proc-macro (for `Nothing`) prints a message to stderr when invoked,
```
Contributor

Technically printing to stderr is a side effect and it makes the macro non-cacheable.
(But it's fine for testing, and if printing is not mandatory for the macro to work.)
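As a tiny standalone illustration of that point (hypothetical names, not rustc code): once a result is replayed from a cache, any side effect of the original execution silently disappears, which is exactly what this test relies on to detect a cache hit:

```rust
use std::collections::HashMap;

// Hypothetical "noisy" derive expansion: it has an observable side
// effect (printing to stderr), tracked here via the `runs` counter.
fn noisy_expand(input: &str, cache: &mut HashMap<String, String>, runs: &mut usize) -> String {
    if let Some(out) = cache.get(input) {
        return out.clone(); // cache hit: the eprintln! below never happens
    }
    *runs += 1;
    eprintln!("expanding `{input}`"); // the side effect the test watches for
    let out = format!("/* expanded {input} */");
    cache.insert(input.to_string(), out.clone());
    out
}

fn main() {
    let mut cache = HashMap::new();
    let mut runs = 0;
    noisy_expand("struct Foo;", &mut cache, &mut runs); // executes and prints
    noisy_expand("struct Foo;", &mut cache, &mut runs); // replayed: prints nothing
    assert_eq!(runs, 1);
}
```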

```rust
// by running it (through compiletest) with `-- --nocapture --verbose`.
// The proc-macro (for `Nothing`) prints a message to stderr when invoked,
// and this message should only be present during the second invocation
// (which has `cfail2` set via cfg).
```
Contributor

Why second?
In the first run we actually execute the macro and print to stderr, in the second one we take the result from cache and do not print, no?

Contributor Author

Oops, left over from a previous version of this test, will remove it. Yes exactly, the second run should use the cached result and not print the messages 👍

```rust
// The proc-macro (for `Nothing`) prints a message to stderr when invoked,
// and this message should only be present during the second invocation
// (which has `cfail2` set via cfg).
// FIXME(pr-time): Properly have the test check this, but how? UI-test that tests for `.stderr`?
```
Contributor

Not sure why you want to test stderr output; incremental tests have directives for testing invalidation.

Contributor Author

Yep, that's what the "Properly have the test check this, but how?" was for; I simply didn't see how to check this. There are directives, but I didn't figure out a way to make them work for this case, because the directive needs to be attached to the proc-macro output, I think, and that was somehow awkward. I will try again and see what exactly the problem was with that.


```rust
//@ aux-build:derive_nothing.rs
//@ revisions:cfail1 cfail2
//@ compile-flags: -Z query-dep-graph
```
Contributor

Why is this necessary?

Contributor Author

It's required for `rustc_partition_codegened` and `rustc_partition_reused`, iirc. Though I'm not sure if they are useful/a good thing to apply in this test.

```diff
@@ -1637,6 +1637,8 @@ options! {
 "emit noalias metadata for box (default: yes)"),
 branch_protection: Option<BranchProtection> = (None, parse_branch_protection, [TRACKED],
 "set options for branch target identification and pointer authentication on AArch64"),
 cache_all_derive_macros: bool = (true, parse_bool, [UNTRACKED],
```
Contributor

Suggested change:
```diff
-cache_all_derive_macros: bool = (true, parse_bool, [UNTRACKED],
+cache_proc_macros: bool = (true, parse_bool, [UNTRACKED],
```

For compatibility with future extensions (bang and attr macro caching, and explicit opt-ins/opt-outs).

```rust
},
);
let proc_macro_backtrace = ecx.ecfg.proc_macro_backtrace;
let strategy = crate::proc_macro::exec_strategy(ecx);
```
Contributor

This doesn't need ecx, only session.

```rust
);
let proc_macro_backtrace = ecx.ecfg.proc_macro_backtrace;
let strategy = crate::proc_macro::exec_strategy(ecx);
let server = crate::proc_macro_server::Rustc::new(ecx);
```
Contributor

Unfortunately, `Rustc` does need `ecx` and uses it in nontrivial ways, which may break macro caching.
This is basically a side channel through which data can flow to and from the macro without the query system noticing.

Contributor

Most of the logic in Rustc::new doesn't need ecx though, except for storing ecx itself into the field of course.

Contributor Author

Yes, this is why I didn't bother replacing the other uses of ecx before, because I saw that this is tied pretty strongly to ecx. But I can probably take care of the replaceable uses in another commit, since it should be mostly independent I'd think.

```rust
if tcx.sess.opts.unstable_opts.cache_all_derive_macros {
    tcx.derive_macro_expansion(key).cloned()
} else {
    crate::derive_macro_expansion::provide_derive_macro_expansion(tcx, key).cloned()
```
Contributor

I don't think we need to set up any global context in this case; we could just pass arguments. The context only needs to be set up around the query call.

```diff
@@ -0,0 +1,125 @@
 use std::cell::Cell;
```
Contributor

It's probably better to merge this file into `compiler/rustc_expand/src/proc_macro.rs` and put the new content at the bottom.

```diff
@@ -103,6 +104,13 @@ pub use plumbing::{IntoQueryParam, TyCtxtAt, TyCtxtEnsure, TyCtxtEnsureWithValue
 // Queries marked with `fatal_cycle` do not need the latter implementation,
 // as they will raise an fatal error on query cycles instead.
 rustc_queries! {
     query derive_macro_expansion(key: (LocalExpnId, Svh, &'tcx TokenStream)) -> Result<&'tcx TokenStream, ()> {
         // eval_always
```
Contributor

I suspect that eval_always may indeed be needed here because the query may access untracked data, but I'm not sure, this whole PR will need another review from some query system / incremental compilation expert.

Contributor Author

Just so I understand correctly: `eval_always` would prevent caching completely, no? Or would it always execute the query, but, if the output stays the same, not mark dependent nodes as "requires recomputation"?

@petrochenkov petrochenkov added S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Aug 20, 2024
@petrochenkov
Copy link
Contributor

This will clearly need an opt-in/opt-out attribute (or better, a modifier for `#[proc_macro*]` attributes), but it can be done in a separate PR.
I'm not sure if we should enable caching by default, even on an edition boundary with a migration lint; it may be too easy to get an incorrect build while not knowing or forgetting about the macro caching.

@futile
Contributor Author

futile commented Aug 24, 2024

Thank you for the review! I'm currently a bit busy, but hope I'll get to addressing all the feedback in the code soon! :)

> This will clearly need an opt-in/opt-out attribute (or better, a modifier for `#[proc_macro*]` attributes), but it can be done in a separate PR.

Yep, that was also my initial thought. I guess it might make sense to also ask on, e.g., internals.r-l.o, or on Zulip, to gather opinions on this? But I'll do that when we get that far.

Regarding the modifier: I scouted that out at first as well, but I think one or two of the proc-macro types already use modifiers (I'm thinking of #[proc_macro_bang(some_modifier)] etc.), and it felt like that might be confusing when mixed together.

> I'm not sure if we should enable caching by default, even on an edition boundary with a migration lint; it may be too easy to get an incorrect build while not knowing or forgetting about the macro caching.

Note that, just as #129102 (comment) mentions, this is already the case. If a proc macro depends on some file/external state, but nothing else about the code has changed, cargo build will already not do anything, and thus also not execute the proc macro.

That doesn't mean I'm against being careful about adding caching for proc macros; on the contrary, I think it makes sense to spread information about this proactively before anything is merged. It's just that these guarantees have strictly never existed, and rust-analyzer already does not re-execute proc-macros all the time.

I think coordinating with #99515 makes a lot of sense, because that would add a way to properly track external dependencies on files and env-vars, which would actually give proc-macro authors a reliable option in general.
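To sketch what tracked external dependencies would buy (hypothetical stand-in code, not the actual `tracked_env` API): if the tracked inputs are folded into the cache key, a cached expansion is invalidated as soon as an env var the macro read changes:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical cache key covering both the input tokens and any
// tracked external inputs (e.g. env vars the macro read), in the
// spirit of `proc_macro::tracked_env`: if a tracked input changes,
// the key changes and the cached expansion is invalidated.
fn cache_key(input_tokens: &str, tracked_env: &[(&str, Option<&str>)]) -> u64 {
    let mut h = DefaultHasher::new();
    input_tokens.hash(&mut h);
    tracked_env.hash(&mut h);
    h.finish()
}

fn main() {
    // `DATABASE_URL` is just an illustrative env var a macro might read.
    let old = cache_key("struct Foo;", &[("DATABASE_URL", Some("postgres://a"))]);
    let same = cache_key("struct Foo;", &[("DATABASE_URL", Some("postgres://a"))]);
    let new = cache_key("struct Foo;", &[("DATABASE_URL", Some("postgres://b"))]);
    assert_eq!(old, same); // nothing changed: cached expansion can be reused
    assert_ne!(old, new); // tracked input changed: must re-expand
}
```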

> but I'm not sure, this whole PR will need another review from some query system / incremental compilation expert.

Would you have someone in mind? For now I'll address this round of feedback first, so I've got a bit to do anyway. I also wanted to add caching for the other proc-macro types (does that make sense already?). Otherwise, what would be the correct way to find somebody? I can definitely ask on Zulip, if that is a good way to go about this.

Again, thanks for the review!

@petrochenkov
Contributor

Could you do @rustbot ready to update the S-waiting-on-* label when this is ready for the next review round?

@futile
Contributor Author

futile commented Aug 25, 2024

> Could you do @rustbot ready to update the S-waiting-on-* label when this is ready for the next review round?

Ah right, yep, will do 👍

@estebank
Contributor

While introducing this in nightly, I think it would be interesting to have a transition mode where the expansion is still run (causing us not to get any speed benefits) but where we can warn, ICE or log that a specific proc-macro happened to not be idempotent. As part of the metrics effort, we're looking for use-cases of questions we want answered, and this seems like it would be an excellent use-case for it. This way we could get information on the prevalence of such macros, even from smaller crates that we don't currently know about. We also would need this mode even after we provide this feature (regardless of whether it is opt-in or opt-out) for crate authors to validate that they haven't accidentally made an uncacheable proc-macro that was meant to be idempotent (as unlikely as that might be).


Regarding linting, there's a problem I can foresee: cap-lints precludes lints from being emitted for dependencies, and the way that crate authors are most likely testing their proc-macros is by using them, which makes the crate that holds the proc-macro a dependency, so they would never see those lints. If we make these lints instead always trigger, then we would be spamming every user of these proc-macros, and not just the authors. I believe we can tweak the behavior of cap-lints to solve this, either through configuration or heuristics, but we need to be aware of that.
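A standalone sketch of such a transition mode (hypothetical names, not rustc code): always re-run the expansion, compare it against the cached output, and record a divergence for any macro that turns out not to be idempotent:

```rust
use std::collections::HashMap;

// Hypothetical validation mode: the expansion is always re-run (so no
// speedup), but the fresh output is compared against the cached one,
// and any mismatch is recorded as a non-idempotent macro.
struct ValidatingCache {
    map: HashMap<String, String>,
    divergences: Vec<String>,
}

impl ValidatingCache {
    fn new() -> Self {
        ValidatingCache { map: HashMap::new(), divergences: Vec::new() }
    }

    fn expand(&mut self, input: &str, run_macro: &mut dyn FnMut(&str) -> String) -> String {
        let fresh = run_macro(input); // always execute, even on a "hit"
        if let Some(cached) = self.map.get(input) {
            if *cached != fresh {
                // In rustc this would feed a warning/lint or the metrics effort.
                self.divergences.push(input.to_string());
            }
        }
        self.map.insert(input.to_string(), fresh.clone());
        fresh
    }
}

fn main() {
    let mut cache = ValidatingCache::new();

    // An idempotent "macro": same input, same output. No divergence.
    let mut pure = |i: &str| format!("/* {i} */");
    cache.expand("struct Foo;", &mut pure);
    cache.expand("struct Foo;", &mut pure);
    assert!(cache.divergences.is_empty());

    // A stateful "macro": output depends on hidden state. Flagged.
    let mut n = 0;
    let mut stateful = |i: &str| { n += 1; format!("/* {i} v{n} */") };
    cache.expand("struct Bar;", &mut stateful);
    cache.expand("struct Bar;", &mut stateful);
    assert_eq!(cache.divergences, vec!["struct Bar;".to_string()]);
}
```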

@weiznich
Contributor

weiznich commented Sep 2, 2024

I spent some time writing up some ideas for how an API for opting into this caching could look. You can find the details here: https://hackmd.io/khUFNws6R5WbwcTUFIB9lw

@bjorn3
Member

bjorn3 commented Sep 2, 2024

> Regarding linting, there's a problem I can foresee: cap-lints precludes lints from being emitted for dependencies, and the way that crate authors are most likely testing their proc-macros is by using them, which makes the crate that holds the proc-macro a dependency, so they would never see those lints. If we make these lints instead always trigger, then we would be spamming every user of these proc-macros, and not just the authors. I believe we can tweak the behavior of cap-lints to solve this, either through configuration or heuristics, but we need to be aware of that.

We use --cap-lints when compiling non-local dependencies. Lints are still emitted while compiling a local crate, even when the lint originates from a macro defined in a non-local dependency.

@futile futile force-pushed the experimental/proc-macro-caching branch from abb1ba2 to 267a5c4 Compare September 9, 2024 12:57
Labels
A-query-system Area: The rustc query system (https://rustc-dev-guide.rust-lang.org/query.html) perf-regression Performance regression. S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue.
10 participants