[devops] Run tests with sqlite backend #6425
You can use this for the sqlite backend:

`(aiida-py311) sph@invader:~/code/aiida/env/dev/aiida-core$ git stash show stash@{0} -p`

```diff
diff --git a/src/aiida/tools/pytest_fixtures/__init__.py b/src/aiida/tools/pytest_fixtures/__init__.py
index 6181183d1..79d027f02 100644
--- a/src/aiida/tools/pytest_fixtures/__init__.py
+++ b/src/aiida/tools/pytest_fixtures/__init__.py
@@ -25,7 +25,7 @@ from .orm import (
     aiida_localhost,
     ssh_key,
 )
-from .storage import config_psql_dos, postgres_cluster
+from .storage import config_psql_dos, config_sqlite_dos, postgres_cluster

 __all__ = (
     'aiida_code_installed',
diff --git a/src/aiida/tools/pytest_fixtures/storage.py b/src/aiida/tools/pytest_fixtures/storage.py
index d76c9b445..e216df664 100644
--- a/src/aiida/tools/pytest_fixtures/storage.py
+++ b/src/aiida/tools/pytest_fixtures/storage.py
@@ -2,6 +2,7 @@
 from __future__ import annotations

+import pathlib
 import typing as t

 import pytest
@@ -85,3 +86,24 @@ def config_psql_dos(
         return storage_config

     return factory
+
+
+@pytest.fixture(scope='session')
+def config_sqlite_dos(
+    tmp_path_factory: pytest.TempPathFactory,
+) -> t.Callable[[str | None, str | None, str | None], dict[str, t.Any]]:
+    """Return a profile configuration for the :class:`~aiida.storage.sqlite_dos.backend.SqliteDosStorage`.
+
+    The factory has the following signature to allow further configuring the database that is created:
+
+    :param database_name: Name of the database to be created.
+    :returns: The dictionary with the storage configuration for the ``core.sqlite_dos`` storage plugin.
+    """
+
+    def factory(filepath: str | pathlib.Path | None = None) -> dict[str, t.Any]:
+        return {'filepath': str(filepath or tmp_path_factory.mktemp('test_sqlite_dos_storage'))}
+
+    return factory
diff --git a/tests/conftest.py b/tests/conftest.py
index 55bf01a18..4c7f7f7d0 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -35,7 +35,7 @@ pytest_plugins = ['aiida.tools.pytest_fixtures', 'sphinx.testing.fixtures']

 @pytest.fixture(scope='session')
-def aiida_profile(aiida_config, aiida_profile_factory, config_psql_dos):
+def aiida_profile(aiida_config, aiida_profile_factory, config_psql_dos, config_sqlite_dos):
     """Create and load a profile with ``core.psql_dos`` as a storage backend and RabbitMQ as the broker.

     This overrides the ``aiida_profile`` fixture provided by ``aiida-core`` which runs with ``core.sqlite_dos`` and
@@ -43,7 +43,7 @@ def aiida_profile(aiida_config, aiida_profile_factory, config_psql_dos):
     be run against the main storage backend, which is ``core.sqlite_dos``.
     """
     with aiida_profile_factory(
-        aiida_config, storage_backend='core.psql_dos', storage_config=config_psql_dos(), broker_backend='core.rabbitmq'
+        aiida_config, storage_backend='core.sqlite_dos', storage_config=config_sqlite_dos(), broker_backend='core.rabbitmq'
     ) as profile:
         yield profile
```
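For illustration, the `config_sqlite_dos` factory above just maps an optional filepath onto the storage configuration dict expected by the `core.sqlite_dos` plugin. A minimal standalone sketch of that behavior, using `tempfile` in place of pytest's `tmp_path_factory` (the function name here is illustrative, not part of aiida-core):

```python
import pathlib
import tempfile


def config_sqlite_dos_factory(filepath=None):
    """Mimic the fixture's factory: build the storage configuration dict
    for the ``core.sqlite_dos`` storage plugin.

    When no filepath is given, fall back to a fresh temporary directory
    (the real fixture uses pytest's ``tmp_path_factory`` instead).
    """
    return {'filepath': str(filepath or tempfile.mkdtemp(prefix='test_sqlite_dos_storage'))}


# An explicit filepath is passed through verbatim, coerced to a string:
config = config_sqlite_dos_factory(pathlib.Path('/tmp/my_storage'))
# config == {'filepath': '/tmp/my_storage'}
```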
Thanks for the PR! This would be super useful for running tests in tox locally. Will try it out.

@agoscinski thanks for taking a look! I will be working on finishing this up today, so you might want to wait with testing till tomorrow.
Codecov Report. Attention: Patch coverage is …

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #6425      +/-   ##
==========================================
+ Coverage   77.51%   77.80%   +0.30%
==========================================
  Files         560      562       +2
  Lines       41444    41788     +344
==========================================
+ Hits        32120    32511     +391
+ Misses       9324     9277      -47
==========================================
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
@agoscinski to fix the SSH errors in your local setup, take a look at the […]. Note that before running […].
```
@@ -439,6 +439,10 @@ def test_launchers():
    assert result == get_true_node()
    assert isinstance(node, orm.CalcFunctionNode)

@pytest.mark.requires_rmq
```
This test fails in the RabbitMQ tests, I don't know why. Perhaps something else from `setup.sh` is needed?
FAILED tests/engine/test_process_function.py::test_submit_launchers - kiwipy.exceptions.UnroutableError: ('NO_ROUTE', 'aiida-d201dba33ae749d8a63dce82958ca171.process.queue')
https://github.com/aiidateam/aiida-core/actions/runs/9289159100/job/25562353186?pr=6425
It's now working, although I have no idea what I changed that made this work.
This is just a transient problem that crops up now and again on GHA.
`tests/orm/test_querybuilder.py` (Outdated)
@sphuber do all these `requires_pgsql` markers make sense to you in this query builder test suite?
Kind of. Each of them either has datetime comparisons, uses the `has_key` or `contains` filters, or relies on comparing literal SQL, which is of course not going to be identical between psql and sqlite. I don't understand why the `test_iterall_*` tests would take so long, though; that seems like a smell worth looking into further. These tests are also related to transaction handling, so again the transaction implementation might be causing some trouble.
@sphuber @agoscinski apart from two small TODOs this is now ready for review / testing. EDIT: Both TODOs are now resolved.
`tests/orm/nodes/test_node.py` (Outdated)

```python
# TODO: Why is this failing for SQLite??
# sqlalchemy.orm.exc.ObjectDeletedError: Instance '<DbNode at 0x7f6a68845990>' has been deleted,
# or its row is otherwise not present.
@pytest.mark.requires_pgsql
```
My first guess would be that it might have something to do with the `backend.transaction()` call on line 863. Not sure if this is directly inherited from the psql_dos implementation, but transactions may work differently in sqlite. I would start looking there first.
I probably won't have time to dig into this. I'll create a separate issue.
That is fine, but we will want to prioritize this because to me it suggests there might be a bug lurking with node deletion for sqlite profiles. It is very likely that this hasn't been tested properly yet.
Opened #6436
```python
@pytest.mark.parametrize('with_broker', (True, False))
def test_presto(run_cli_command, with_broker, monkeypatch):
    """Test the ``verdi presto``."""
def test_presto_without_rmq(pytestconfig, run_cli_command, monkeypatch):
```
I think it ended up being cleaner to split this test into two instead of parametrizing it.
Meh, I see a lot of code duplication, but fine, I will let it slide :D
Here's the alternative:

```python
@pytest.mark.usefixtures('empty_config')
@pytest.mark.parametrize('with_broker', (pytest.param(True, marks=pytest.mark.requires_rmq), False))
def test_presto(pytestconfig, run_cli_command, with_broker, monkeypatch):
    """Test the ``verdi presto``."""
    from aiida.brokers.rabbitmq import defaults

    if not with_broker:
        # Patch the RabbitMQ detection function to pretend it could not find the service
        monkeypatch.setattr(defaults, 'detect_rabbitmq_config', lambda: None)

    result = run_cli_command(verdi_presto)
    assert 'Created new profile `presto`.' in result.output

    with profile_context('presto', allow_switch=True) as profile:
        assert profile.name == 'presto'
        localhost = Computer.collection.get(label='localhost')
        assert localhost.is_configured
        if with_broker:
            assert profile.process_control_backend == 'core.rabbitmq'
        else:
            assert profile.process_control_backend is None
```
All I see is if statements and complexity. 😛 (but honestly both feel fine)
Why does every PR that touches GHA end up with tens of commits? 😭 @sphuber this should now be ready. There might be some controversial changes to the test suite, happy to discuss (see the updated OP).
Thanks a lot @danielhollas. Seems that the Lord finally answered your prayers and made the tests pass. Changes seem mostly good, just a few smaller comments.
`.github/workflows/ci-code.yml` (Outdated)

```
run: pytest -m 'presto'

tests-sqlite:
```
You already mentioned in the OP that having this one as well might be overkill. What does this add on top of the `tests` and `tests-presto` jobs? It is really just adding tests being run with SQLite and RMQ, right? I doubt there would be many situations where this combination is relevant. So I would be tempted to leave this one out for now.
> It is really just adding tests being run with SQLite and RMQ, right? I doubt there would be many situations where this combination is relevant.

Indeed. The idea was to not really test this combination per se, but to increase the code coverage when running with the sqlite backend, under the assumption that many of the tests requiring rmq will also exercise the DB backend. Another minor advantage would be that it would ensure that the `not requires_psql` marker keeps working.
I've temporarily enabled code coverage for presto and sqlite runs. The overall coverage for full tests suite is 77.8%, for presto it's 73.1% and 76.5% for sqlite.
I am fine either way. We could also achieve a similar outcome by running the nightly RabbitMQ tests with the sqlite backend. WDYT?
I will remove `tests-sqlite` for now since it seems like little coverage gain for a significant amount of resource cost.
```diff
@@ -52,6 +53,8 @@ def create_database(
         cluster = PGTest()
         cluster.create_database = create_database
         yield cluster
+    except OSError:
```
When does this happen?
This is what I am getting (I don't have Postgres globally installed):
OSError: b'initdb: error: file "/opt/conda/envs/aiida-core-services/share/postgres.bki" does not exist\ninitdb: hint: This might mean you have a corrupted installation or identified the wrong directory with the invocation option -L.\npg_ctl: database system initialization failed\n'
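The failure mode above can be tolerated by catching the `OSError` around the cluster creation. A hedged sketch of the idea (the helper name `init_cluster` is illustrative, not aiida-core's actual code): when `PGTest()` raises `OSError` because the PostgreSQL binaries are missing, return `None` instead of failing the whole session:

```python
def init_cluster(make_cluster):
    """Try to create a PostgreSQL test cluster, e.g. via ``pgtest.PGTest``.

    Returns ``None`` when the cluster cannot be initialized because
    PostgreSQL is not installed (``initdb`` failures surface as ``OSError``),
    so that SQLite-only test sessions can still proceed.
    """
    try:
        return make_cluster()
    except OSError:
        return None


# With PostgreSQL missing, PGTest raises OSError from initdb:
def broken_pgtest():
    raise OSError('initdb: error: file ".../postgres.bki" does not exist')

assert init_cluster(broken_pgtest) is None
```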
```python
if not postgres_cluster:
    msg = 'Could not initialize PostgreSQL cluster'
```
Why not raise this in the `postgres_cluster` fixture itself when you catch the `OSError`?
Good point. I guess I was trying to reduce the depth of the stacktrace that pytest vomits when this happens. But it could hide the actual reason for the error. I've changed it.
I take that back --- this was a load-bearing hack, because this fixture is evaluated eagerly, before we could decide that we want the `sqlite_dos` backend in `aiida_profile`. I'd need to think about a nicer solution.
> a load-bearing hack

🤣 I love it

That makes sense that you have to fix it this way, but I wonder what will happen if someone uses the `postgres_cluster` fixture. They will just get `None` instead of an exception, which won't be immediately obvious. Please just comment the reason for the `OSError` and add a line in the docstring noting when it will return `None`.
Yeah, since this is a public fixture the hack is really not ideal. Since this is a new fixture that hasn't been released yet, should we try to change its API so that the cluster is created only when really needed?
If you have an idea of how, then yes I think that would be preferable
I tried to refactor by encapsulating the cluster in a class / object. LMK if it makes sense.
If the only thing that users of this fixture are supposed to interact with is the `create_database` method, then I think this new approach makes sense.
The unit test suite currently automatically creates a test profile using the `core.psql_dos` storage plugin. The reason is mostly historical, as originally the `core.psql_dos` storage was the only one. The downside of this storage plugin, however, is that it requires a PostgreSQL database. This typically requires a PostgreSQL service to be running, making it more difficult to get a working development environment. This was made slightly easier through the use of `pgtest`, which would create a PSQL cluster on-the-fly, but this would still require the postgres libraries to be installed.

Nowadays, storage plugins are customizable and `core.sqlite_dos` replaces PostgreSQL with the serviceless SQLite. Its dependencies are a lot easier to install, making it perfect for testing. However, the test suite automatically configures a `core.psql_dos` profile, and some tests are not compatible with SQLite since they directly depend on functionality that is only supported by PSQL and not SQLite. In this commit, changes are made that allow running the tests with SQLite.

Two new pytest markers are added to the test suite:

* `requires_pgsql`: for tests that are known to require PostgreSQL (i.e. do not work with SQLite)
* `presto`: an alias for `not requires_rmq and not requires_pgsql`. The tests are marked with `presto` automagically in `tests/conftest.py:pytest_collection_modifyitems`

A new CI job, `test-presto`, is added that runs the tests with the `presto` marker.
Thanks a lot @danielhollas
The purpose of this PR is twofold:

* Add support for running the test suite with `sqlite_dos` as a storage backend

Two new pytest markers are added to the test suite:

* `requires_pgsql` for tests that are known to require PostgreSQL (i.e. do not work with SQLite)
* `presto`, which is an alias for `not requires_rmq and not requires_pgsql`. The tests are marked with `presto` automagically in `tests/conftest.py:pytest_collection_modifyitems`
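The auto-marking during collection could look roughly like the sketch below. This is a hedged illustration assuming only the marker names from this PR; the helper `needs_no_services` and the exact hook body are inventions for clarity, not the actual `tests/conftest.py` code:

```python
def needs_no_services(marker_names):
    """Return True when a test carries neither the ``requires_rmq`` nor the
    ``requires_pgsql`` marker, i.e. it qualifies for the ``presto`` alias."""
    return not ({'requires_rmq', 'requires_pgsql'} & set(marker_names))


def pytest_collection_modifyitems(config, items):
    """Auto-mark every service-free test as ``presto`` during collection."""
    import pytest  # imported lazily so the helper above stays dependency-free

    for item in items:
        if needs_no_services(marker.name for marker in item.iter_markers()):
            item.add_marker(pytest.mark.presto)
```

With such a hook in place, `pytest -m presto` selects exactly the tests that need neither a RabbitMQ broker nor a PostgreSQL cluster.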
Two new CI jobs are added:

* `test-presto` runs with `pytest -m presto`
* `test-sqlite` runs with `pytest -m 'not requires_pgsql'`

Perhaps we might retain only one of them?

In exchange for introducing two new ones, I removed the two verdi jobs and moved the corresponding tests (such as `verdi devel check-imports`) after the normal test run. These are super fast, and setting them up separately takes much more time than running them. Happy to discuss this though.