Isolate each unit test in a self-contained file system #74
Comments
Alternatively, we could use a fictitious but static directory ([…])
No matter how many platforms I test on, platform-dependent test errors still come along and surprise me every once in a while. Several months ago, for example, the tests in a previous CRAN release were failing on my personal Ubuntu box because […]
I wonder, do the unit tests for storr all pass on your machine?
I think I fixed it. I tried a […]. I was able to recreate the issue by running a […]:
> drake::in_progress()
[1] "small"
> devtools::test()
Loading drake
Loading required package: testthat
Testing drake
basic: ..........................
cache: ...1.....................................2......................
command-changes: ........
edge-cases: .....................
envir: .....
examples: ....
full-build: .............................................
generate: ...................
import-file: ..........
import-object: ..........
intermediate-file: ......
Makefile: .............................
namespaced: .........
other-features: 3............4..................................
plan: .............
Failed -------------------------------------------------------------------------
1. Failure: cache functions work (@test-cache.R#11) ----------------------------
in_progress(search = FALSE) not equal to character(0).
Lengths differ: 1 vs 0
2. Failure: cache functions work (@test-cache.R#114) ---------------------------
in_progress(search = TRUE, path = s) not equal to character(0).
Lengths differ: 1 vs 0
3. Failure: in_progress() works (@test-other-features.R#6) ---------------------
in_progress() not equal to character(0).
Lengths differ: 1 vs 0
4. Failure: deps() correctly reports dependencies of functions and commands (@test-other-features.R#51)
deps(my_plan$command[2]) not equal to c("'tracked_input_file.rds'", "readRDS", "x").
2/3 mismatches
x[1]: "readRDS"
y[1]: "'tracked_input_file.rds'"
x[2]: "'tracked_input_file.rds'"
y[2]: "readRDS"
So while this may no longer be a bug-fix issue, I wonder if it would still be worth implementing after the next round of approvals is up? I know that when I'm looking to write tests for #56, it would be easier for me to not worry about the environment. Also, I just tried storr, and those tests run fine. Additionally, I have run into the […]
withr::with_dir for unit tests
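A minimal sketch of that idea, using only withr and testthat (illustrative only, not drake's actual test helper; the sandbox path and the test itself are made up):

```r
library(testthat)
library(withr)

# Hypothetical sandbox inside the session's temporary directory.
sandbox <- file.path(tempdir(), "drake-test-sandbox")
dir.create(sandbox, showWarnings = FALSE, recursive = TRUE)

# Run one test with the sandbox as the working directory, so any files or
# caches the test creates never touch the real project directory.
with_dir(sandbox, test_that("files land in the sandbox", {
  saveRDS(mtcars, "tracked_input_file.rds")
  expect_true(file.exists("tracked_input_file.rds"))
}))
```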
Glad you figured it out! And yes, while not technically a bug, I still think this is a problem because drake should pay no attention to […]. Also, I think I just added enough sorting on my end to […]
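The sorting change itself is not shown here, so purely as an illustration of why failure 4 above is order-dependent (the real fix may sort inside deps() rather than in the test), the two vectors can be compared without regard to order:

```r
library(testthat)

# The vectors from failure 4: deps() returned the same elements as expected,
# just in a different order, so sorting both sides makes the comparison pass.
observed <- c("readRDS", "'tracked_input_file.rds'", "x")
expected <- c("'tracked_input_file.rds'", "readRDS", "x")
expect_equal(sort(observed), sort(expected))
```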
I actually decided to create a separate local file system for each test:

test_with_dir <- function(desc, code){
  root <- "workspaces"
  if (!file.exists(root)){
    dir.create(root)
  }
  relative_dir <- base32_encode(desc) %>%
    digest(algo = hash_algorithm)
  dir <- file.path(root, relative_dir)
  dir_empty(dir)
  dir <- normalizePath(dir)
  with_dir(dir, test_that(desc = desc, code = code))
  # Keep dir until the very end. We should check for hash conflicts.
  # If there are, we need to rename the prose descriptions of the tests.
}

Rationale: […]
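For context, a hypothetical example of how this helper would be used in a test file, assuming test_with_dir() simply replaces test_that() throughout the suite:

```r
# Each call gets its own scratch directory derived from the test description,
# so leftover files and caches from one test cannot leak into another.
test_with_dir("cache functions work", {
  expect_equal(in_progress(search = FALSE), character(0))
})
```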
I guess the downside is that drake might still get confused if we test […]

By the way, I created a nonempty […]

Never mind about the […]

Do you get issues with conflicting caches if you have an unfinished (failed) test, and then try to run it again? Either same session or after a restart? Mostly I'm wondering if it would be worth it to put a line to […]

I did not check, but because of the […]
The approaches above still gave me problems with the file system, so I decided to go back to […]:

test_with_dir <- function(desc, code){
  root <- tempdir()
  stopifnot(file.exists(root))
  relative_dir <- base32_encode(desc) %>%
    digest(algo = hash_algorithm)
  dir <- file.path(root, relative_dir)
  dir_empty(dir)
  with_dir(dir, test_that(desc = desc, code = code))
}
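Both versions of test_with_dir() rely on a dir_empty() helper that is not shown in the thread. Assuming "empty" simply means "wipe and recreate", a plausible sketch is:

```r
# Delete the directory (and everything in it), then recreate it empty.
dir_empty <- function(x){
  unlink(x, recursive = TRUE, force = TRUE)
  dir.create(x, showWarnings = FALSE, recursive = TRUE)
}
```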
Confirmed: everything looks totally clean with the new […]
I've been trying to chase down why some of the unit tests (consistently) fail on my local machine. I'm planning a PR with some of the easy, minor fixes, but one that might be trickier is that anything with

expect_equal(in_progress(), character(0))

fails for me. My current thought is that dclean() isn't actually scrubbing the old plan, because report.md shows up as in progress every time (even though it has been unlinked). My proposal is to wrap the testthat files in something along the lines of […] (a rough sketch follows the numbered list below). This would allow us to use the actual clean(), and put that through its paces on a real directory, rather than just using dclean(). Additionally, this would get rid of any weird coincidental ordering effects, or unexpected persistence from previous tests. The three big issues that I can see with this method right now are:

1: I'm not sure exactly how you've implemented #58, but I want to avoid conflicts with that.

2: Since tempdir() is the same within a session of R, we would need to make sure to run clean() at the beginning and end of each test. Alternatively, we could put a file.remove(tempdir()) at the end of each test file, to completely scrap the tempdir and force it to be recreated for each test. A slight performance hit, but not huge.

3: tempdir() will actually keep the same value for child sessions spawned by parallel, which is sort of a mixed bag. It is good for testing, since we can still use it with multiple jobs (although I'm not entirely sure how it would interact with parallelism = Makefile). Mostly the warning would be to not run tests in the same session as an actual make where something important is being made.
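As flagged above, here is one way to read that proposal (a rough sketch under the stated assumptions, not drake's actual test setup): give each testthat file its own subdirectory of tempdir(), run the file's tests there with withr::with_dir(), and scrap the subdirectory afterwards so the next file starts from a blank slate.

```r
library(withr)

# Hypothetical per-file scratch directory; the name is made up.
file_dir <- file.path(tempdir(), "test-cache-scratch")
dir.create(file_dir, showWarnings = FALSE, recursive = TRUE)

with_dir(file_dir, {
  # clean() at the beginning would guard against leftovers from an earlier,
  # failed run; this file's tests would run here; clean() at the end (or the
  # unlink() below) would reset state for the next file.
})

# Scrap the scratch directory entirely so it is recreated for the next test
# file. Note that file.remove() does not delete directories; unlink() with
# recursive = TRUE does.
unlink(file_dir, recursive = TRUE)
```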