
Allow ui tests to have dependencies in a reliable way #2373

Merged (1 commit, Jul 20, 2022)

Conversation

@oli-obk (Contributor) commented Jul 14, 2022

This completely sidesteps the issue in compiletest-rs where old artifacts of a dependency cause "multiple available crates of name XXX" errors. At this point I think we've reached feature parity for Clippy, too, so I'm going to try publishing a version once this is merged.

@RalfJung (Member)

Could you describe the design at a high level? That would help with reviewing. :)

@oli-obk (Contributor, Author) commented Jul 15, 2022

  1. There is a crate that we build once to generate all dependencies.
  2. We grab all the artifacts that were built for that crate.
  3. We link every test against all those artifacts.

Coincidentally, this removes the implicit trick of using dev-dependencies from the main miri crate in tests via extern crate imports. The dev-deps scheme can require a cargo clean when updating dependencies, as you'd otherwise get "multiple imports found" errors when two versions are available.
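The three steps above can be sketched roughly like this. This is a minimal illustration of step 3 only; the crate name, path, and helper function are invented and do not reflect the actual ui_test code:

```rust
// Hypothetical sketch of step 3: turn an artifact collected in step 2
// into the rustc flags that link a ui test against it. The example path
// below is invented for illustration.
use std::path::Path;

/// Map an artifact like "target/debug/deps/libserde-abc123.rlib" to an
/// `--extern serde=<path>` flag plus a `-L` flag for its directory.
fn extern_flags(artifact: &str) -> Vec<String> {
    let path = Path::new(artifact);
    let stem = path.file_stem().unwrap().to_str().unwrap();
    // Recover the crate name: drop the "lib" prefix and the "-<hash>" suffix.
    let name = stem.trim_start_matches("lib");
    let name = name.rsplit_once('-').map_or(name, |(n, _)| n);
    vec![
        format!("--extern={name}={artifact}"),
        format!("-L{}", path.parent().unwrap().display()),
    ]
}

fn main() {
    for flag in extern_flags("target/debug/deps/libserde-abc123.rlib") {
        println!("{flag}");
    }
}
```

Because the dependency crate is built once into its own target dir, every test sees exactly one artifact per crate, which is what avoids the "multiple available crates" problem.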

@RalfJung (Member)

> this removes the implicit trick of using dev-dependencies from the main miri crate for tests via extern crate imports

I've never heard of this; no idea what you are talking about here.

@oli-obk (Contributor, Author) commented Jul 15, 2022

> I never heard of this, no idea what you are talking about here.

We have lots of tests that use extern crate libc, which just accidentally works by picking up the libc from the sysroot. But with compiletest-rs you could have picked any dev-dependency and used it (Clippy tests usages of serde types and derives in its ui tests this way).

@RalfJung (Member)

Oh lol, I had no idea. Like, taking the sysroot libc seems fine, but I didn't know the other thing would have worked. Good thing I didn't.^^

@bors (Contributor) commented Jul 18, 2022

☔ The latest upstream changes (presumably #2380) made this pull request unmergeable. Please resolve the merge conflicts.

@oli-obk oli-obk changed the title Allow ui tests to have reliable dependencies Allow ui tests to have dependencies in a reliable way Jul 18, 2022
@oli-obk (Contributor, Author) commented Jul 18, 2022

Do you want to review this? Or can I merge myself?

@RalfJung (Member)

It's in my queue :)

@RalfJung (Member) left a comment

Wow this is clever, I like it a lot. :)

@RalfJung (Member)

We will probably want to move some tests from test-cargo-miri to here, in particular the getrandom tests. Those use cargo's dependency renaming feature:

getrandom_1 = { package = "getrandom", version = "0.1" }
getrandom_2 = { package = "getrandom", version = "0.2" }

Will that work?

@RalfJung (Member) commented Jul 19, 2022

Oh, also please check that this works in the rustc workspace via x.py. Invoking ./miri will probably not be the right call there...

@oli-obk (Contributor, Author) commented Jul 19, 2022

> Will that work?

I was afraid of this ^^ It doesn't work with my current scheme. The fix is obvious, and I knew I should've done it this way, but 🤷

Basically, what I need to do is also run cargo metadata to get these renames and how they map to the artifacts.

EDIT: this now works correctly
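For the getrandom case above, cargo metadata reports, per resolved dependency, the name the code uses it under (the rename, if one exists, e.g. under resolve.nodes[].deps[].name), and the --extern flag has to use that name rather than the package name. A minimal sketch with an invented struct and path, not the actual ui_test code:

```rust
// Invented illustration: a renamed dependency carries the name the test
// code uses ("getrandom_1") separately from the package name ("getrandom"),
// and the linker flag must be built from the former.
struct Dep {
    /// Name the test code uses, i.e. the rename if one exists.
    extern_name: String,
    /// Artifact built for this exact version of the package.
    artifact: String,
}

fn extern_flag(dep: &Dep) -> String {
    format!("--extern={}={}", dep.extern_name, dep.artifact)
}

fn main() {
    let dep = Dep {
        extern_name: "getrandom_1".to_string(),
        artifact: "target/debug/deps/libgetrandom-0f3a.rlib".to_string(),
    };
    println!("{}", extern_flag(&dep));
}
```

This is why both getrandom versions can coexist: each rename maps to its own artifact, so the two flags never collide on a crate name.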

@oli-obk (Contributor, Author) commented Jul 19, 2022

> Oh also please check that this works in the rustc workspace via x.py. The ./miri will probably not be the right call there...

Even the latest version doesn't work from the rustc workspace yet, so this PR is not ready yet.

@RalfJung (Member)

> even the latest version doesn't work from rustc yet, so this PR is not ready yet

@rustbot author

@rustbot rustbot added the S-waiting-on-author Status: Waiting for the PR author to address review comments label Jul 19, 2022
@oli-obk (Contributor, Author) commented Jul 19, 2022

Status update:

I can reproduce the failure with x.py now: running

cargo run --manifest-path=cargo-miri/Cargo.toml -- miri run --manifest-path=test_dependencies/Cargo.toml

via the bootstrap machinery will succeed in building cargo-miri (that already worked for cargo miri setup), but fail when cargo-miri tries to build build scripts (or proc macros), as they can't find libstd.

edit: manually adding --sysroot=build/...../stage1 makes it compile; now I gotta figure out why/where we're losing that
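The eventual fix has to make sure a sysroot flag also reaches the host builds (build scripts and proc macros). As a hedged illustration of the general shape only, not the actual cargo-miri logic:

```rust
// Hedged sketch (invented helper, not cargo-miri's real code): append a
// --sysroot flag to a rustc argument list unless one is already present,
// so host builds (build scripts, proc macros) can find libstd too.
fn ensure_sysroot(mut args: Vec<String>, sysroot: &str) -> Vec<String> {
    let has_sysroot = args
        .iter()
        .any(|a| a == "--sysroot" || a.starts_with("--sysroot="));
    if !has_sysroot {
        args.push(format!("--sysroot={sysroot}"));
    }
    args
}

fn main() {
    let args = ensure_sysroot(vec!["--edition=2021".to_string()], "/path/to/stage1");
    println!("{args:?}");
}
```

The check for an existing flag matters: bootstrap may already pass its own --sysroot, and rustc rejects duplicate conflicting values.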

@oli-obk (Contributor, Author) commented Jul 20, 2022

@rustbot ready

@rustbot rustbot added S-waiting-on-review Status: Waiting for a review to complete and removed S-waiting-on-author Status: Waiting for the PR author to address review comments labels Jul 20, 2022
@RalfJung (Member)

@rustbot author

@rustbot rustbot added S-waiting-on-author Status: Waiting for the PR author to address review comments and removed S-waiting-on-review Status: Waiting for a review to complete labels Jul 20, 2022
@oli-obk (Contributor, Author) commented Jul 20, 2022

@rustbot ready

@RalfJung (Member)

r=me with run-test.py fixed and commits squashed

@oli-obk (Contributor, Author) commented Jul 20, 2022

@bors r=RalfJung

@bors (Contributor) commented Jul 20, 2022

📌 Commit ab6fb9d has been approved by RalfJung

It is now in the queue for this repository.

@bors (Contributor) commented Jul 20, 2022

⌛ Testing commit ab6fb9d with merge 1366bf6...

@bors (Contributor) commented Jul 20, 2022

☀️ Test successful - checks-actions
Approved by: RalfJung
Pushing 1366bf6 to master...

    //@only-target-linux: the errors differ too much between platforms

    #[tokio::main]
    async fn main() {}
A Member commented:

Btw, it would be nice to also have a 'pass' test for #602 (comment).

Should we have these tests of non-stdlib crates in a subdir of our test folders? The root dirs are already quite messy, so something like {pass,fail}/crates might make sense?

A Contributor (Author) replied:

Yeah, grouping them into a custom folder makes sense.

A Member replied:

#2408 does these changes.

@oli-obk oli-obk deleted the test_dependencies branch July 21, 2022 12:14