API tests are failing, add sleep #7020 #7022
Closed
What this PR does / why we need it:
The API test suite has been failing for a while. We decided to try adding sleep statements until we have a better fix in #6865.
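For context, the workaround amounts to pausing the test thread between API calls so the server has time to settle before the next request. A minimal sketch of the pattern (the helper name and the one-second delay are illustrative, not the exact lines changed in this PR):

```java
public class SleepBetweenCalls {

    // Blunt workaround for timing-dependent 500s: block the test thread
    // briefly before the next API request. Not a real fix (see #6865).
    static void pause(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        pause(1000); // e.g. give the server a second before replacing a file
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Thread.sleep guarantees roughly the requested duration, subject
        // to timer granularity, so allow a small tolerance.
        System.out.println(elapsedMs >= 990);
    }
}
```

Fixed sleeps like this mask the underlying race rather than fix it, which is why the PR description treats them as a stopgap until #6865.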
Which issue(s) this PR closes:
Relates to #7020, but Jenkins will determine whether the API test suite passes. We can keep an eye on https://jenkins.dataverse.org/job/IQSS-Dataverse-Develop-PR/
Special notes for your reviewer:
The failures are fairly random. I added sleeps in two places that failed repeatedly across many runs. Given the randomness, I don't have a lot of confidence that this pull request will get us to our goal of a passing API test suite.
I wrote down the runs that produced failures: 8 in total, though another 4-5 runs had no failures at all. Because of this inconsistency, I started stopping and starting Payara between runs, thinking that maybe everything is warmer after a run and less likely to fail. Only one run, the first, had multiple failures. Runs are separated by an extra newline:
```
[ERROR] DatasetsIT.testAddRoles:1197 expected:<200> but was:<500>
[ERROR] FilesIT.testReplaceFileBadJson:898 Expected status code <200> doesn't match actual status code <500>.
[ERROR] FilesIT.test_006_ReplaceFileGood:340 Expected status code <200> doesn't match actual status code <500>.

[ERROR] FilesIT.test_007_ReplaceFileUnpublishedAndBadIds:748 Expected status code <200> doesn't match actual status code <500>.

[ERROR] FilesIT.test_006_ReplaceFileGood:340 Expected status code <200> doesn't match actual status code <500>.

[ERROR] DatasetsIT.testLinkingDatasets:2006 Expected status code <200> doesn't match actual status code <500>.

[ERROR] FilesIT.testReplaceFileBadJson:900 Expected status code <200> doesn't match actual status code <500>.

[ERROR] FilesIT.test_008_ReplaceFileAlreadyDeleted:823 Expected status code <200> doesn't match actual status code <500>.

[ERROR] DatasetsIT.testCreateDeleteDatasetLink:1730 Expected status code <200> doesn't match actual status code <500>.

[ERROR] FilesIT.testReplaceFileBadJson:900 Expected status code <200> doesn't match actual status code <500>.
```
Suggestions on how to test this:
Jenkins will test it. To test manually, spin up an EC2 instance similar to the one Jenkins spins up; that's what I did.
Does this PR introduce a user interface change? If mockups are available, please link/include them here:
No.
Is there a release notes update needed for this change?:
No.
Additional documentation:
No.