Refactoring unit tests #335
Replies: 4 comments
-
I agree this is a good place to focus. Documenting these expectations is a good place to start, but I was thinking of taking it one step further and creating a function (or functions) that checks for these common requirements in every assessment. I guess we could use the framework in this link to write them. The more I think about it, though, the more I'm on the fence: this "functionalization" step may or may not be worth the effort for v1.0.
-
The other main thing on my mind is leaning more heavily on the testing of
-
For an actionable next step, I suggest a quick "pilot" PR to refactor the test cases for
-
On a more general note, I think our testing framework is already way past the threshold needed for v1 QC, so my goals for this refactor are more about process than quality. The goal here is to make the existing tests easier to maintain and update, and to set up a process framework that makes it relatively straightforward to spin up new tests.
-
Originally, I thought it would be a good idea to write a few helper functions to create unit tests for similar functions (e.g., `*_Map_Raw()` or `*_Assess()`). After doing some more research, it seems like we do have a good initial framework for unit tests, where each `testthat` file contains tests that are specific to a single function.

**tl;dr:** my suggestion is to use `expect_snapshot_*` until the expected error is solidified.

Here are some ideas to plan for the future state of `{gsm}`, improve current unit tests, and document some process improvements and/or general guidelines:

**Use `expect_snapshot` until a function is in a steady state.** `expect_snapshot()` and the associated "snapshot" tests can be useful for capturing an error message or other hard-to-document output and saving it in a separate `.md` file found in `tests/testthat/_snaps/`. At a `main` release, or maybe even a major release (or whenever we determine), we could consider it part of qualification or due diligence to refactor unit tests to expect specific warning or error messages. This would be somewhat tedious, but would only need to be done for high-risk and/or high-impact functions.
**Example:** Instead of including hard-coded strings of expected error messages in `expect_error()`, we can instead use `expect_snapshot()`.
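A minimal sketch of the two styles, using a made-up `Study_Assess()` function and error message as stand-ins (the real {gsm} functions and messages will differ):

```r
library(testthat)

# Hypothetical stand-in for a {gsm} assessment function
Study_Assess <- function(dfInput) {
  if (!is.data.frame(dfInput)) stop("dfInput must be a data.frame")
  dfInput
}

# Before: the expected message is hard-coded, so the test breaks
# whenever the wording changes.
test_that("Study_Assess() errors on non-data.frame input", {
  expect_error(Study_Assess(NULL), "dfInput must be a data.frame")
})

# After: the message is recorded once under tests/testthat/_snaps/
# and reviewed via testthat::snapshot_accept() when it changes.
test_that("Study_Assess() errors on non-data.frame input", {
  expect_snapshot(Study_Assess(NULL), error = TRUE)
})
```

On the first run, `testthat` writes the captured condition to the `.md` snapshot file; subsequent runs compare against it instead of a hard-coded string.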
This captures the most recent message and documents it in a separate `.md` file under `tests/testthat/_snaps/`.

**Document standard assumptions for a given type of function, and ensure unit tests exist for each function.**
For example, `*_Map` functions must each contain tests for a standard set of requirements. The suggestion here is to add a section to the Wiki page, or somewhere else, documenting a unit-testing standard for each type of function.
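As a hedged illustration of what that standard could look like once agreed on, a shared helper could bundle the common expectations (the helper name and the required column below are hypothetical, not current {gsm} conventions):

```r
library(testthat)

# Hypothetical helper encoding a shared standard for *_Map output
expect_valid_map_output <- function(df) {
  expect_s3_class(df, "data.frame")        # output is a data.frame
  expect_true("SubjectID" %in% names(df))  # hypothetical required column
  expect_false(any(duplicated(df)))        # no duplicate rows
}

# Each test-*_Map file would then only need, e.g.:
# test_that("AE_Map_Raw() meets the *_Map standard", {
#   expect_valid_map_output(AE_Map_Raw(dfInput))
# })
```

This keeps the per-function test files short while making the shared requirements explicit in one place.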
The table below shows tests where `Count > 1`; we do have some consistency here, but will likely benefit from documenting a standard:

**Resources:**
- `dplyr`

To create the table above:
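The snippet that generated the table did not carry over here; one possible sketch of the idea, tallying `expect_*()` calls across `testthat` files with `dplyr` (the path and regex are assumptions):

```r
library(dplyr)

# Count expect_*() usage across all testthat files (rough sketch)
files <- list.files("tests/testthat", pattern = "^test-.*\\.[Rr]$", full.names = TRUE)

counts <- lapply(files, function(f) {
  lines <- readLines(f, warn = FALSE)
  data.frame(Test = unlist(regmatches(lines, gregexpr("expect_[a-z_0-9]+", lines))))
}) |>
  bind_rows() |>
  count(Test, name = "Count") |>
  arrange(desc(Count)) |>
  filter(Count > 1)
```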
cc: @jwildfire @kodesiba @gwu05