Possibility to use drake to run unit tests #206
@lbusett This is a fantastic idea! I will have a proper response soon.
I love this idea. I have not actually tried this approach, but here is how I see it starting to play out in real life.
Other notes:
This is already very involved, and I feel strongly that it should be its own separate tool, maybe a standalone package.
Hi, thanks for your reply. I agree that a separate tool would be the way to go.

As far as I understand, the tricky part here would be to build something like a "matrix" of dependencies for each test (i.e., which internal functions (from the package itself) and imported functions are used when running the test), so that a run can be made "obsolete" if any of the dependencies changed (e.g., because a new version of an imported function was installed; maybe that could be tracked by saving and comparing sessionInfo). I'll see if I can find some time in the next weeks to start experimenting on this. Will keep you posted.
Sounds great, @lbusett. Please keep me posted. If we get it to work, this could be a valuable contribution to the package development process. I am focused on #227 at the moment, and I won't have time to work on this myself.

One of the strengths of drake is that it dives into the functions you write, nesting through the functions they call, to detect dependencies. The catch here is that nesting stops at functions that look like they're from packages. That means you will have to trick the package you're testing into not looking like a package namespace. @dapperjapper found a hack that does this, and I put it in the caution vignette.
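For a flavor of what that hack could look like, here is a rough base-R approximation (my own sketch, not the exact code from the caution vignette): copy a package's functions into a plain environment and detach them from the namespace, so dependency analysis can recurse into their bodies. The function name `expose_package` is made up for illustration.

```r
# Rough sketch (an approximation, not drake's official API): copy a
# package's functions into an ordinary environment and reassign their
# enclosing environment, so they no longer look like package-namespace
# functions. Demonstrated with base R's "stats" package.
expose_package <- function(pkg, envir = new.env()) {
  ns <- getNamespace(pkg)
  funs <- Filter(is.function, as.list(ns, all.names = TRUE))
  funs <- lapply(funs, function(f) {
    environment(f) <- envir  # detach the copy from the package namespace
    f
  })
  list2env(funs, envir = envir)
}

e <- expose_package("stats")
environment(e$sd)  # now `e`, not the stats namespace
```

Note the caveat: reassigning environments can break functions that rely on non-function objects in the namespace, which is part of why this is tricky enough to deserve its own tool.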
Hi, I just started experimenting on this. From my initial tests, however, it seems more feasible/easy to exploit the fact that tests are usually organized in files, and all tests within a file can be run with test_file(). I already tested parallelizing the test execution by creating different targets for different calls to test_file(), and it works quite well (though I am struggling a bit with working directory issues). Obviously, it is suboptimal, because if one test file takes much more time than the others (because it includes more/longer tests), you will not leverage the full possible parallelization speed-up. However, the advantage is that this workaround does not require parsing the code of the test files to create commands for the targets on each run.

Regarding the dependency issues: yes, after reading a bit more of the drake documentation, I understand that the issue should already be dealt with by drake. I'll keep you posted.
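The per-file workaround described above might look roughly like this (a hypothetical sketch, not code from this thread: it assumes the drake and testthat packages, a conventional tests/testthat/ layout, and drake's data-frame plan format; all target names are illustrative). Building the plan is plain base R; only the final `make()` call needs drake itself.

```r
# Hypothetical sketch: one drake target per testthat file.
# Assumes tests live in tests/testthat/ (the usual package layout).
test_files <- list.files("tests/testthat", pattern = "^test.*\\.[Rr]$",
                         full.names = TRUE)

# In drake's data-frame plan format, a plan is just targets plus
# commands. file_in() tells drake to watch each test file, so a target
# reruns only when its file (or an upstream dependency) changes.
plan <- data.frame(
  target  = paste0("test_", seq_along(test_files)),
  command = sprintf("testthat::test_file(file_in('%s'))", test_files),
  stringsAsFactors = FALSE
)

# drake::make(plan, jobs = 2)  # run up to two test files in parallel
```

As noted above, parallelism is then limited by the slowest test file, since a file is the smallest unit of work.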
That sounds like an excellent proof of concept, and it should give you a lot of mileage immediately. And yes, parsing code is hard, but I think it will be worth your while later on.
When you get to that stage, the CodeDepends package by @gmbecker and @duncantl should take care of the heavy lifting for you. I will try to get to #208 soon, and I look forward to learning how your experiments turn out.
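For a flavor of the static analysis involved, here is a base-R approximation of the kind of work CodeDepends automates far more carefully (the helper name `called_names` and the throwaway test file are made up for illustration): parse a test file and collect every symbol it mentions, so a tool could mark a test as outdated when any of those functions change.

```r
# Base-R approximation of the static analysis CodeDepends automates:
# parse a file and collect every name (function or variable symbol)
# appearing in its expressions.
called_names <- function(path) {
  exprs <- parse(file = path)
  sort(unique(unlist(lapply(exprs, all.names))))
}

# Illustration with a throwaway test file (my_add is a made-up function):
tmp <- tempfile(fileext = ".R")
writeLines(c(
  'test_that("adds", {',
  '  expect_equal(my_add(1, 2), 3)',
  '})'
), tmp)
called_names(tmp)  # includes "my_add", "expect_equal", "test_that"
```

A real tool would also need to distinguish the package's own functions from imported ones and recurse into their definitions, which is where CodeDepends earns its keep.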
Hi. Sorry for pinging you on a closed issue. Just a quick question: since drake is apparently going through a major overhaul, do you think it would be better to use the GitHub version for my experiments on this (or maybe even delay them to wait for a stable version, I don't know)? Thanks in advance.
I would recommend the development version, especially after I fix #208. I originally meant to work on #237 first, but the benefits of #208 are more immediate, including an easier time developing a tool like this.

When I close an issue, all that says is that I no longer think there is a problem with drake itself.
Hi,
I was thinking that a possible nice "use case" for drake could be using it to run the unit tests for R packages. Currently, when I have to test a package, I use
devtools::test()
which automatically runs ALL tests, even if no changes occurred (with respect to the last successful run) in the files involved in a particular test. If I want to run tests only on new/modified functions, I need to manually run test_file()
or testthis::test_this()
on that specific file/function. I was therefore thinking that it would be nice to be able to run a "command" which automatically skips tests that passed in the past if no changes occurred, and only runs the ones affected by the changes.
This would considerably speed-up the testing for packages with an extensive set of unit tests.
Do you think this would be possible using Drake? Are there any particular drawbacks/things that would make it difficult?
Thanks in advance for any insights!