Adapted the rbootnoise tests into the context #32
Merged
Conversation
….5e-8 deviations, for example caused by expected rounding errors between different systems.
…s due to the unavoidable technical variation without a carefully controlled containerized environment, which is not applicable in the R CMD Checks.
… were compared. This somewhat useless test caused absurd behaviour with the deviation tolerance of all.equal() (different data returned TRUE). Also increased the remaining tolerances of the other tests.
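The pitfall described in this commit, a loose all.equal() tolerance letting clearly different data pass, can be sketched in R. The vectors and tolerance below are illustrative, not the package's actual test data:

```r
# Sketch of the pitfall: with a large enough tolerance, all.equal()
# reports TRUE even for clearly different data, making the test useless.
reference <- c(1.0, 2.0, 3.0)
observed  <- c(1.4, 2.4, 3.4)   # differs substantially from the reference

# all.equal() compares the mean relative difference (0.2 here) against the
# tolerance, so a loose tolerance of 0.5 lets the comparison "pass".
isTRUE(all.equal(reference, observed, tolerance = 0.5))  # TRUE

# With the default tolerance (~1.5e-8) the same comparison fails as expected.
isTRUE(all.equal(reference, observed))                   # FALSE
```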
…ething major has changed in the background, such as how the set.seed() works. https://stackoverflow.com/questions/47199415/is-set-seed-consistent-over-different-versions-of-r-and-ubuntu
…exactly the same way. Its tolerance limit was increased further. The new principles of these tests were explained in the earlier commit.
…ests explained in the earlier commit.
…ow the set.seed() exactly
…ata to hard drive and loading them back into memory. If these nominal differences are the reason, let's reject those. Thanks to excellent other tests of the lmeresampler, the carry-over of attributes within an R session should be okay.
…sts is to check the numeric reproducibility within the accepted deviation. The numeric-deviation limit was decreased back to a lower level and all.equal() removed.
…he problem, also in this context. It was removed and the focus was put on acceptable numeric reproduction of the results, considering the cross-platform, non-containerized context.
Tuning tests
The failing rbootnoise R CMD Checks are now fixed. The previous tests were overkill for the cross-platform context of the R CMD Checks, which fundamentally cannot use a single, highly reproducible container. The purpose of the updated rbootnoise tests is to perform rough comparisons that tolerate relatively large deviations from the previously acquired reference data; the tests aim to catch large deviations caused by significant technical issues. Without a tightly controlled, individual container environment (not applicable in the context of cross-platform R CMD Checks), exact technical reproducibility cannot be established. For example, it is known that different versions of R can change the behaviour of the fundamental set.seed(), even on the same underlying system, an unavoidable technical quirk accepted by the R community.
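As a sketch of the rough-comparison approach described above (the reference values, tolerance, and test name are illustrative, not the package's actual test code), a tolerance-based regression test in testthat might look like:

```r
library(testthat)

# Hypothetical reference results recorded earlier on a development machine;
# in the real tests these would be loaded from stored reference data.
reference <- c(0.1234567, 0.7654321)

test_that("results roughly match the stored reference", {
  set.seed(1)
  observed <- reference + rnorm(2, sd = 1e-9)  # stand-in for the recomputed results

  # A deliberately loose relative tolerance: small cross-platform numeric
  # drift passes, while a genuine technical failure would still be caught.
  expect_equal(observed, reference, tolerance = 1e-6)
})
```

The design trade-off is that a looser tolerance sacrifices sensitivity to tiny regressions in exchange for tests that do not break on every platform's rounding behaviour.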
https://stackoverflow.com/questions/47199415/is-set-seed-consistent-over-different-versions-of-r-and-ubuntu
These technical variations are especially problematic for individual replicates, which cannot stabilize over a longer run. Thus, exact reproduction of data is not expected in this context, where such reproduction is not warranted.
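For the specific set.seed() change referenced above (the sample() algorithm change in R 3.6.0), base R does offer a way to pin RNG behaviour to an older version; this is a general R facility mentioned here for context, not something the PR itself uses:

```r
# RNGversion() sets the RNG kinds to match a given R version.
# Pinning to "3.5.0" restores the pre-3.6.0 sample() behaviour
# (R emits a warning about the non-uniform 'Rounding' sampler).
RNGversion("3.5.0")
set.seed(123)
old_style <- sample(10)

# Restore the current session defaults afterwards.
RNGversion(as.character(getRversion()))
```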