Allow tests to be updated automatically #744
Conversation
Force-pushed from 529fea5 to c4dc229
Force-pushed from 0661e83 to eb85245
That way, only the result of the first `SolverChoice` is used to "bless" the expected output. Others are simply compared with the first choice.
This is harder than it should be due to the way we write `Display` impls. `ConstrainedSubsts` is wrapped in `Canonical`, a container over a generic `T`, so the `Display` impl for `Canonical` cannot know that the contained type has a `display` method that bundles the interner. As a result, the interner is gone by the time we get to the `Display` impl for `ConstrainedSubsts`. I think a better solution is to implement a custom `DebugWith<Ctxt>` trait (see Jonathan Turner's `lark` for prior art).
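To illustrate the idea, here is a minimal sketch of what such a context-carrying debug trait could look like. This is hypothetical code, not chalk's or `lark`'s actual implementation; the names `Interner`, `fmt_with`, and the stub types are assumptions for the example.

```rust
use std::fmt;

// Hypothetical stand-in for chalk's interner; the real one resolves
// interned IDs back into printable data.
struct Interner;

// A context-carrying debug trait: the context is threaded through every
// call, so generic containers never lose access to the interner.
trait DebugWith<Cx: ?Sized> {
    fn fmt_with(&self, cx: &Cx, f: &mut fmt::Formatter<'_>) -> fmt::Result;
}

struct Canonical<T> {
    value: T,
}

// The container impl needs only `T: DebugWith<Cx>`, not `T: Display`,
// so the context survives all the way down to the leaves.
impl<Cx: ?Sized, T: DebugWith<Cx>> DebugWith<Cx> for Canonical<T> {
    fn fmt_with(&self, cx: &Cx, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "Canonical {{ ")?;
        self.value.fmt_with(cx, f)?;
        write!(f, " }}")
    }
}

// A leaf type can finally use the interner when formatting itself.
struct ConstrainedSubst(u32);

impl DebugWith<Interner> for ConstrainedSubst {
    fn fmt_with(&self, _interner: &Interner, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "ConstrainedSubst({})", self.0)
    }
}
```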
Force-pushed from eb85245 to 6ad0318
@jackh726 This is ready to review now (commit-by-commit), but no rush. Enjoy the holidays.
So, what's the tangible benefit here? Blessable tests? It doesn't seem like it really removes any testing infra. There are definitely some niceties that can be split out regardless of whether we land this, notably the test output formatting changes. Is there any way to have the test macro generate the `expect!` invocations?
Blessable tests. I'd encourage you to ask around before pooh-poohing them.
I don't think so.
cc @nikomatsakis, that's relevant to the "what snapshot testing we should use for salsa" question
Yeah, I found expect-style tests to be a force multiplier. Another big benefit is that you can refactor the testing framework itself. I.e., if the shape of the thing you test changes, you don't have to fix every test by hand. Looking at this PR, it seems that you already have a dedicated testing macro:

```rust
test! {
    program {
        #[lang(copy)]
        trait Copy { }
        struct Foo { }
        impl Copy for Foo { }
    }

    goal {
        forall<const N> {
            [Foo; N]: Copy
        }
    } yields {
        "Unique"
    }
}
```

I don't have the full context here, but I'd probably do a different thing than this PR does. Instead, I'd do either of the two:
```rust
program! {
    #[lang(copy)]
    trait Copy { }
    struct Foo { }
    impl Copy for Foo { }
}.check(
    goal! {
        forall<const N> {
            [Foo; N]: Copy
        }
    },
    expect![["Unique"]]
);
```
That is, the approach here puts `expect!` outside the test macro itself. What we do in rust-analyzer is use raw strings for both the input and the output of the tests:

```rust
check(r#"
fn some_rust_code() {
}
"#,
    expect![["some debug output"]]
);
```
To be concrete, I'm interested in simplifying lifetime constraints arising from subtyping relationships on higher-ranked types for use in Polonius, as well as (in the longer term) extending Rust's type system to support constraints on those types (e.g. …).
All this to avoid a macro invocation? In addition to vendoring …
I probably would have implemented …
I'm convinced enough by the arguments here. But, I'd at least like to run this by @nikomatsakis first (though, I doubt he'll have strong feelings one way or the other).
```rust
    &choices[0],
    &choices[i + 1]
);
assert_same(head, other);
```
`assert_same` strips whitespace, uses prefix matching, and has a different diff output than `Expect::assert_eq`, so the error that occurs when two solvers don't match looks quite different from the one that occurs when the expected output is wrong. `Expect::assert_eq` does not work without something to bless, though it could be made to with rust-analyzer/expect-test#22. If this is merged, I expect that uses of `assert_same` will slowly be removed in favor of `Expect`, so I don't think we need to resolve this right now. Others may disagree, however.
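For context, the "collect the engine results first, compare later" strategy described in the PR summary could look roughly like this. This is a hypothetical sketch, not chalk's actual code; only `expect_test::Expect::assert_eq` is the real crate API, and `check_all_solvers` is an invented helper name.

```rust
use expect_test::Expect;

// Hypothetical sketch: run every solver first, require exact agreement
// between the results, and only then compare against the expected output.
// The expectation is blessed at most once, so `UPDATE_EXPECT=1` can never
// rewrite the same literal twice and corrupt the test file.
fn check_all_solvers(outputs: &[String], expected: &Expect) {
    let (head, rest) = outputs.split_first().expect("at least one solver");
    for other in rest {
        // Exact comparison; a whitespace-stripping, prefix-matching check
        // like `assert_same` would hide real differences between solvers.
        assert_eq!(head, other, "solvers disagree");
    }
    // Only the first solver's output is used to bless the expectation.
    expected.assert_eq(head);
}
```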
Force-pushed from 6ad0318 to 1627b6e
@nikomatsakis and I talked about this and agreed that this is probably a good idea :) I just need to devote some time to do a final quick review before I r+ this (so please ping me if I forget and this sits)
Quick comment: can we use one of the existing libraries for this? (I didn't even read the PR, sorry...) e.g., insta or https://github.com/rust-analyzer/expect-test
It's using `expect-test`.
well then I'll just go away :)
@bors r+
📌 Commit 1627b6e has been approved by `jackh726`
☀️ Test successful - checks-actions
Uses the `expect-test` crate (thanks for the recommendation @lnicola!) to implement snapshot testing. `expect-test` is a bit more flexible than `insta`, so it seems like a good fit. This gives nicer diffs as well, since we don't strip whitespace prior to the comparison. Setting `UPDATE_EXPECT=1` before running `cargo test` will update all tests automatically.

Blockers

- `Solution`'s `Display` impl. `chalk` currently compares prefixes to make things less noisy, but that seems error prone anyways.
- `yields`, which compares the results of the two solvers against a known output. Unfortunately, the test macro will duplicate the `expect` invocation, so if neither output matches the expected one, `expect-test` will overwrite it twice, almost assuredly corrupting the test file. ~~I've hacked around it here by adding a method to `Expect` (which is not yet upstream).~~ Most robust is to collect the various engine results first and only compare them against the output if they agree. This is implemented now.
- `expect` breaks updating (rust-analyzer/expect-test#20)

Nice to haves

- `expect![[`? It's currently hard-coded into the `expect-test` crate (Allow users to rebind `expect!`, rust-analyzer/expect-test#26).
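For illustration, here is a minimal `expect-test` usage showing the blessing workflow. The test body and its output string are hypothetical, not taken from this PR; `expect!` and `Expect::assert_eq` are the crate's real API.

```rust
use expect_test::expect;

#[test]
fn solves_to_unique() {
    // Hypothetical output; imagine it comes from running the solver.
    let actual = "Unique; substitution [], lifetime constraints []";

    // On a mismatch, running `UPDATE_EXPECT=1 cargo test` rewrites the
    // literal below in place with `actual` instead of failing the test.
    expect![["Unique; substitution [], lifetime constraints []"]].assert_eq(actual);
}
```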