Integration Tests
David Newswanger edited this page Mar 25, 2022
The tests in tests/integration run using pytest and are meant to exercise the product at a higher level, via the API, the web UI, or a user-facing CLI such as pulp-cli.

Integration tests may:
- call API endpoints using the api_client provided by fixtures
- call API endpoints using any other API client if needed
- run pulp-cli commands
- DO NOT import objects from the galaxy_ng project
- DO NOT import objects from any of the pulp_ projects
- DO NOT import objects from any other tests under /tests/
- DO NOT run Django management commands or any other Python-level script
- DO NOT access the database directly or the running hub's filesystem
In other words, integration tests are to run at the same level as a user or external system consuming the project via its public APIs.
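In practice, this rule means a test may talk to the hub only over HTTP. A minimal sketch using only the standard library (the base URL, endpoint path, and token handling here are illustrative assumptions, not the project's real fixtures):

```python
import json
from urllib.parse import urljoin
from urllib.request import Request, urlopen

# Assumed local dev URL -- adjust to your environment.
API_ROOT = "http://localhost:5001/api/automation-hub/"

def namespace_url(name):
    """Build the public v3 namespace endpoint URL."""
    return urljoin(API_ROOT, f"v3/namespaces/{name}/")

def get_namespace(name, token):
    """Fetch a namespace via the public API only -- no galaxy_ng imports."""
    req = Request(namespace_url(name),
                  headers={"Authorization": f"Token {token}"})
    with urlopen(req) as resp:
        return json.load(resp)
```

In real tests the api_client fixture plays this role; the point is that every interaction goes through the public API surface.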
- Run your system normally, e.g.: ./compose up
- With the system running, set the testing admin token with make docker/loadtoken and load test data with make docker/load_test_data; this sets the auth token to a fixed, known value for tests and initializes test data.
- Export the environment variable: export HUB_LOCAL=1
- Run integration tests using pytest via the RUN_INTEGRATION helper script. This command can accept any argument accepted by pytest.
- Running all standalone tests: dev/common/RUN_INTEGRATION.sh -sv --log-cli-level=DEBUG
- Running only a specific set of tests: dev/common/RUN_INTEGRATION.sh -sv --log-cli-level=DEBUG "-m standalone_only" -k mytest
- Based on the feature/bugfix you are working on, find an appropriate module under tests/integration or create a new one if needed.
- Write pytest-style test cases.
- Use the appropriate markers found in tests/integration/conftest.py, and add new markers to the list if needed.
- Reuse fixtures available in tests/integration/conftest.py, and add new fixtures if needed.
- Reuse utilities from tests/integration/utils.py, and add more utilities if needed.
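As an illustration of registering a marker and defining a fixture in conftest.py (the names smoke and api_prefix are hypothetical, not markers or fixtures that actually exist in the project):

```python
import pytest

# Hypothetical constant shared with tests via a fixture.
API_PREFIX = "/api/automation-hub/"

def pytest_configure(config):
    # Register a custom marker so tests can be selected with `pytest -m smoke`.
    config.addinivalue_line("markers", "smoke: quick sanity checks")

@pytest.fixture
def api_prefix():
    # Hypothetical fixture exposing the hub API prefix to tests.
    return API_PREFIX
```

Registering markers this way keeps `pytest --strict-markers` runs clean and makes the marker list discoverable via `pytest --markers`.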
tests/integration/test_namespace_foo_exists.py

```python
import pytest
from ansible.errors import AnsibleError

from ..utils import get_client
from ..utils import generate_unused_namespace
from ..utils import get_all_namespaces


@pytest.mark.namespace
@pytest.mark.cloud_only
def test_namespace_foo_exists(ansible_config):
    """Ensure `foo` is an existing namespace"""
    config = ansible_config("ansible_partner")
    api_client = get_client(config, request_token=True, require_auth=True)
    try:
        resp = api_client('/api/automation-hub/v3/namespaces/foo/')
    except AnsibleError as e:
        if e.http_code == 404:
            pytest.fail("Namespace foo doesn't exist")
        else:
            raise
    assert resp["name"] == "foo"
```
- Tests intended to run only in a specific development mode must be marked with either @pytest.mark.cloud_only or @pytest.mark.standalone_only