
Acceptance Test Speedup #1028

Merged
merged 24 commits into develop from dev/acceptance-speedup-2 on Dec 22, 2021

Conversation

@cea2aj (Member) commented Dec 20, 2021

Improve Acceptance Test Speed

The primary reason for the slowdown is that BrowserStack runs the IE11 tests across many BrowserStack sessions. The Firefox and Safari tests wait for each test to pass in IE11, which is why there are long periods of time where nothing is happening in the Safari and Firefox test videos. I am submitting a support ticket to BrowserStack to see if they can explain why this happens for IE11. Separating the tests into their own jobs prevents the Safari and Firefox tests from waiting on the IE11 run, which makes the BrowserStack videos much more useful.

For now, increasing the concurrency of the IE11 runs helps alleviate this issue. I recommend we run IE11 with a concurrency of at least 3, because that improves the IE11 test runtime from 20-25 minutes to about 7. This puts us at 5 concurrent tests for non-dev branches. If BrowserStack gets back to us with a fix, we can decrease or remove the IE11 concurrency. So far we haven't run into issues with too many concurrent tests, but if it becomes a problem we can consider decreasing the concurrency or not testing IE11 on so many branches.
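As a rough sketch of where a concurrency setting like this typically lives (the browser alias, test path, and script name are assumptions based on the files this PR touches; only the `--concurrency` flag is TestCafe's documented CLI option):

```sh
# Hypothetical excerpt from .github/run_browserstack_acceptance.sh:
# run the IE11 suite with 3 concurrent BrowserStack sessions.
npx testcafe "browserstack:ie@11.0:Windows 10" tests/acceptance/ \
  --concurrency 3
```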

Decreasing the zoom stabilization interval shaved a few more seconds off the overall runtime.

Finally, I moved all of the locator acceptance tests into one file because the request logger wasn't being set up properly across the two separate files. The broken request logger was adding about 30 seconds to the tests, because waitOnSearchComplete would wait the full 10-second timeout before proceeding. The root cause was that SearchRequestLogger.createVerticalSearchLogger() was being called twice, which created two separate RequestLoggers rather than one. Reusing the same request logger across both test fixtures resolves the issue.
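The two-instances bug described above can be illustrated with a plain-Node stub (the factory below is a hypothetical stand-in for SearchRequestLogger.createVerticalSearchLogger(), not the project's real implementation):

```javascript
// Hypothetical stub: each call to the factory creates an independent
// logger, so requests recorded by one instance are invisible to the other.
function createVerticalSearchLogger() {
  return {
    requests: [],
    onResponse(response) { this.requests.push(response); }
  };
}

// Bug: two files each create their own logger.
const loggerA = createVerticalSearchLogger();
const loggerB = createVerticalSearchLogger();
loggerA.onResponse('vertical-search');
console.log(loggerB.requests.length); // 0 -- loggerB never saw the response

// Fix: create the logger once and share the same instance everywhere.
const sharedLogger = createVerticalSearchLogger();
sharedLogger.onResponse('vertical-search');
console.log(sharedLogger.requests.length); // 1
```

With a single shared instance, a wait-for-response helper like waitOnSearchComplete observes every response instead of timing out on an empty logger.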

In Safari, the browser window would sometimes not be wide enough for the full page map tests, so I set a fixed window width that keeps the map in desktop mode.
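A minimal sketch of how such a fix usually looks in TestCafe (the fixture and dimensions shown are assumptions, not this PR's actual values; `t.resizeWindow` is TestCafe's documented test action):

```js
fixture`Vertical Full Page Map`
  .page(`http://localhost:${PORT}/locations_full_page_map`)
  .beforeEach(async t => {
    // Pin the viewport so the map renders in desktop mode regardless of
    // the browser's default window size (1280x1024 is an assumed value).
    await t.resizeWindow(1280, 1024);
  });
```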

J=1756
TEST=auto

Confirm tests all still pass

See that the BrowserStack test speed improves from 1 minute 15 seconds to about 45 seconds locally, and that the IE11 run improves from over 20 minutes to about 7.5 minutes.

@coveralls commented Dec 20, 2021

Coverage Status

Coverage decreased (-0.01%) to 8.828% when pulling 7f7ed9c on dev/acceptance-speedup-2 into c12bb51 on develop.

@cea2aj cea2aj changed the title Acceptance Test Speedup (#2) Acceptance Test Speedup Dec 21, 2021
Review threads (resolved):
.github/run_browserstack_acceptance.sh (outdated)
.github/workflows/acceptance_headless.yml

```js
fixture`Vertical Full Page Map`
  .page(`http://localhost:${PORT}/locations_full_page_map`)
  .page(`http://localhost:${PORT}/locations_full_page_map_with_filters`)
```
@yen-tt (Contributor) commented Dec 21, 2021

Aren't locations_full_page_map and locations_full_page_map_with_filters nearly the same page? I'm not sure we should just stop running tests on locations_full_page_map. Does it fail if you have two separate fixtures, using different URLs, in the same file?

@cea2aj (Member, Author) replied:

The pages are exactly the same, except the _with_filters page has filters enabled. All of the map functionality tested on the locations_full_page_map page should pass on the locations_full_page_map_with_filters page. I can check whether two separate fixtures work in the same file, but I don't see a need for it.

@cea2aj (Member, Author) replied:

Multiple fixtures worked fine, but I learned that I needed to reuse the same verticalSearchLogger or else the request logger wouldn't work.

@yen-tt (Contributor) commented Dec 22, 2021

Do the fixtures run concurrently? I'm wondering if using the same verticalSearchLogger could be misleading if one fixture receives a response triggered by another fixture and just moves on to the rest of the test without waiting on its actual search request.

@cea2aj (Member, Author) replied:

The fixtures run sequentially, so I don't think that's an issue.
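If that assumption ever changes and the fixtures do run concurrently or leak responses across fixtures, one defensive option is to empty the shared logger before each test. This is a sketch, not code from this PR: `clear()` is TestCafe's documented RequestLogger method, but the import path, logger name, and hook shown are assumptions.

```js
import { SearchRequestLogger } from './searchRequestLogger';

// One module-level logger shared by every fixture in the file.
const verticalSearchLogger = SearchRequestLogger.createVerticalSearchLogger();

fixture`Vertical Full Page Map`
  .page(`http://localhost:${PORT}/locations_full_page_map`)
  .beforeEach(async () => {
    // Drop responses recorded by earlier tests so a stale response
    // can't satisfy this test's waitOnSearchComplete prematurely.
    verticalSearchLogger.clear();
  });
```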

@cea2aj cea2aj requested a review from yen-tt December 22, 2021 16:01
@cea2aj cea2aj merged commit c79e6ff into develop Dec 22, 2021