
More advanced unittests (eg of analysis and generate steps) fail when run as GitHub action #340

Closed
carlhiggs opened this issue Jul 4, 2023 · 3 comments

Comments

@carlhiggs
Collaborator

carlhiggs commented Jul 4, 2023

Describe the bug
As per issue #337, I attempted to include a more robust suite of tests to evaluate analysis, generation of resources, and comparison of regions as a sensitivity analysis.

However, I ended up commenting out most of the new tests, as they failed when run as part of a GitHub Actions workflow despite passing locally.

I suspect it's to do with some kind of parallel processing, with the tests not running sequentially. I attempted to force them to run sequentially, but things still failed (and after looking at the test log, I'm not convinced the tests actually ran sequentially).

carlhiggs added a commit that referenced this issue Jul 4, 2023
attempt to address #340 by restricting max-workers to 1 (attempting to edit online as local commit push failed with alert of limitations of personal access token for editing workflows)
carlhiggs added a commit that referenced this issue Jul 4, 2023
…o get more advanced unittests to work in GitHub actions, as parallelisation may be causing problems; have to commit to see if this will work!)
carlhiggs added a commit that referenced this issue Jul 4, 2023
reverting because setting max-workers didn't resolve the issue #340
carlhiggs added a commit that referenced this issue Jul 4, 2023
…ility for test running (ie. without having to run these as subprocesses, in case this helps address #340)
carlhiggs added a commit that referenced this issue Jul 4, 2023
…rhood analysis code that was previously used in debugging)
@carlhiggs
Collaborator Author

Re this, I had some success with the ordering just by making sure the unittests were named in alphanumeric sequence (so, test_1..., test_2..., etc.).
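
For reference, unittest runs the test methods of a TestCase in alphabetical order of their names by default, which is why numeric prefixes are enough to pin the order. A minimal sketch with made-up test names:

import unittest


class TestPipeline(unittest.TestCase):
    """Numeric prefixes make the default alphabetical ordering explicit."""

    def test_1_configure(self):
        # Runs first: 'test_1_configure' sorts before 'test_2_analyse'.
        self.assertTrue(True)

    def test_2_analyse(self):
        # Runs second, so it can rely on whatever test_1_configure set up.
        self.assertTrue(True)


if __name__ == '__main__':
    # TestLoader.sortTestMethodsUsing defaults to plain string comparison,
    # which is what makes the numbered names run in sequence.
    unittest.main()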

But the error now seems to be in the internal communication between the Docker containers -- the db connection fails (oh, that should be a test in itself!)
https://github.com/global-healthy-liveable-cities/global-indicators/actions/runs/5451743372/jobs/9918347325?pr=341

======================================================================
ERROR: test_4_example_analysis (tests.tests.tests)
Analyse example region.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/ghsci/process/tests/tests.py", line 57, in test_4_example_analysis
    r._create_database()
  File "/home/ghsci/process/subprocesses/ghsci.py", line 144, in _create_database
    create_database(self.codename)
  File "/home/ghsci/process/subprocesses/_00_create_database.py", line 31, in create_database
    conn = psycopg2.connect(
  File "/env/lib/python3.10/site-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "gateway.docker.internal" (172.17.0.1), port 5433 failed: Connection refused
	Is the server running on that host and accepting TCP/IP connections?
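
A minimal standalone connectivity check along those lines might look something like the following; the host, port and credentials here are assumptions read off the traceback above, not the project's actual settings:

import unittest

import psycopg2


class TestDatabaseConnection(unittest.TestCase):
    def test_0_db_connection(self):
        """Fail early, with a clear message, if the database container is unreachable."""
        try:
            conn = psycopg2.connect(
                host='gateway.docker.internal',  # assumed from the traceback
                port=5433,                       # assumed from the traceback
                user='postgres',                 # placeholder credentials
                password='password',
                dbname='postgres',
                connect_timeout=10,
            )
            conn.close()
        except psycopg2.OperationalError as e:
            self.fail(f'Could not connect to the database container: {e}')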

@carlhiggs
Collaborator Author

The last bit --- adding the database health checks and dependencies --- did the trick!

The workflow takes a while -- probably too long; we could cut some corners (e.g. we probably don't have to run the full sensitivity analysis -- we could duplicate the results, modify them and pretend we did, then compare; that would cut the time in half).
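
The health checks and dependencies mentioned above live in the container/workflow configuration; a rough Python-level analogue of the same idea -- polling until the database accepts connections before any tests run -- could look like this (the connection details are again assumptions, not the project's actual settings):

import time

import psycopg2


def wait_for_db(host='gateway.docker.internal', port=5433, timeout=60):
    """Poll the database until it accepts connections, or give up after `timeout` seconds."""
    deadline = time.time() + timeout
    while True:
        try:
            psycopg2.connect(
                host=host,
                port=port,
                user='postgres',       # placeholder credentials
                password='password',
                dbname='postgres',
                connect_timeout=5,
            ).close()
            return
        except psycopg2.OperationalError:
            if time.time() > deadline:
                raise
            time.sleep(2)  # back off briefly before retrying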

carlhiggs added a commit that referenced this issue Jul 4, 2023
@carlhiggs
Collaborator Author

The tests basically work, and the sensitivity analysis only fails on a technicality --- comparison of identical areas triggers a sys.exit() advising that the inputs are identical. So the cheeky workaround didn't quite work --- to work, it needs some numbers jiggled around...

carlhiggs added a commit that referenced this issue Jul 4, 2023
…parison (cast the reference df to ints, so by rounding there is a difference); as per #340, the sensitivity analysis test is now fast and passes
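
A minimal sketch of the perturbation trick described in that commit message -- casting the reference results to ints so that rounding/truncation produces a small but detectable difference between otherwise identical outputs (the dataframe and column used here are illustrative only):

import pandas as pd

# Pretend these are indicator results for the 'reference' and 'comparison' runs.
results = pd.DataFrame({'walkability': [0.82, 1.37, 2.56]})

# Casting the reference copy to int truncates its values, so the two frames are
# no longer identical and the comparison step has a real difference to report.
reference = results.astype(int)
comparison = results

print(comparison - reference)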