Test Scenarios Description
The Test Suite is organized around a series of Test Scenarios chaining a set of Test Cases.
The Test Scenarios are designed as a sequence of basic Test Cases covering one or several functionalities of the Target Site in order to reproduce a typical user operation on the target site.
The following table describes each Test Scenario, divided into 2 groups:
- Local Test Scenarios are locally executed measuring metrics of the various functions from the local Test Site towards the Target Site. They are run using the cdab-client command line tool.
- Remote Test Scenarios are remotely executed on virtual machines within the service providers' cloud infrastructure (when available), measuring metrics of the various functions directly within the Target Site. They are run using the cdab-remote-client command line tool.
Test Scenario 15 (TS15) covers several end-to-end scenarios which are independent of each other.
The following sections give a short overview about the simple test cases and how they can be configured and run.
This test performs multiple concurrent remote HTTP web requests to the front endpoint of the target site. It measures, among other metrics, the average and peak response times.
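The measurement this test performs can be sketched as follows; the function names and metric keys are illustrative, not the test suite's actual implementation:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def summarize(times):
    """Aggregate per-request durations (seconds) into the average and
    peak response-time metrics (illustrative key names)."""
    return {"avgResponseTime": sum(times) / len(times),
            "peakResponseTime": max(times)}

def timed_get(url, timeout=30):
    """Fetch a URL once and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    with urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.perf_counter() - start

def measure_endpoint(url, concurrency=10):
    """Issue `concurrency` parallel GET requests against the front
    endpoint and summarize the observed response times."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = list(pool.map(timed_get, [url] * concurrency))
    return summarize(times)
```

The actual test records more metrics than this sketch, but the concurrency-plus-timing pattern is the core of it.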
This test performs a simple filtered search (e.g. by mission or product type) and verifies whether the results match the specified search criteria. The test client sends multiple concurrent remote HTTP web requests to the front OpenSearch API of the target site using the OpenSearch mechanism to query and retrieve the search results. Searches are limited to simple filters (no spatial or time filters) established randomly on the missions dictionary.
Among the obtained metrics are the average and peak response times, the number of results and the size of the responses.
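A random simple query could be composed roughly as below; the missions dictionary excerpt and the parameter names (`platform`, `productType`, `maximumRecords`) are illustrative placeholders, as the real names depend on the provider's OpenSearch description:

```python
import random
from urllib.parse import urlencode

# Illustrative excerpt of a missions dictionary; the real dictionary is
# part of the test suite configuration.
MISSIONS = {
    "Sentinel-1": ["GRD", "SLC"],
    "Sentinel-2": ["S2MSI1C", "S2MSI2A"],
    "Sentinel-3": ["OL_1_EFR___"],
}

def random_simple_query(base_url, rng=random):
    """Compose a simple OpenSearch query URL with a randomly chosen
    mission and product type, and no spatial or time filters."""
    mission = rng.choice(sorted(MISSIONS))
    product_type = rng.choice(MISSIONS[mission])
    params = {"platform": mission,
              "productType": product_type,
              "maximumRecords": 20}
    return f"{base_url}?{urlencode(params)}"
```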
This test performs a more complex filtered search (e.g. by geometry, acquisition period or ingestion date) and verifies whether the results match the specified search criteria. The test client sends multiple concurrent remote HTTP web requests to the front catalogue search API (preferably the OpenSearch API) of the target site using the search mechanism to query and retrieve the search results. N queries are prepared with all filters (spatial and time filters included) and composed with random filters from the missions dictionary.
The obtained metrics are the same as in TC201.
This test performs a specific filtered search (e.g. geometry, acquisition period, ingestion date) with many results pages and verifies whether the results match the specified search criteria. The test client sends multiple concurrent remote HTTP web requests to the front OpenSearch API of the target site using the OpenSearch mechanism to query and retrieve the search results over many results pages. The search filters are fixed (a moving window in time).
The obtained metrics are the same as in TC201.
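The two building blocks of this test, a fixed-length moving time window and a walk over all result pages, can be sketched as follows (the window length and page size are illustrative values, not the suite's configuration):

```python
from datetime import datetime, timedelta

def moving_window(end, days=7):
    """A time window of constant length ending at `end`, mimicking the
    moving window this test uses as its fixed filter."""
    return (end - timedelta(days=days), end)

def page_offsets(total_results, page_size):
    """1-based OpenSearch startIndex values needed to request every
    result page of a search."""
    return list(range(1, total_results + 1, page_size))
```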
This test performs a simple filtered search (e.g. by mission or product type) for querying only offline data and verifies whether the results match the specified search criteria. The test client sends multiple concurrent remote HTTP web requests to the front catalogue search API (preferably the OpenSearch API) of the target site using the search mechanism to query and retrieve the search results. N queries are prepared with simple filters (no spatial or time filters) plus a specific filter to select offline data only. They are composed with random filters from the missions dictionary.
The obtained metrics are the same as in TC201.
This test is the remote version of TC201. Its results are derived from executing TC201 from a virtual machine on the target provider's cloud (if the provider offers processing infrastructure).
This test is the remote version of TC202. Its results are derived from executing TC202 from a virtual machine on the target provider's cloud (if the provider offers processing infrastructure).
This test evaluates the download service of the target site for online data. The test client makes a single remote download request to retrieve a product file via a product URL.
Among the metrics the test obtains is the throughput of the downloaded data.
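The throughput metric boils down to bytes transferred over elapsed time; a minimal sketch of a streamed, timed download (chunked to keep memory constant for large products) might look like this:

```python
import time
from urllib.request import urlopen

def throughput_mbps(num_bytes, seconds):
    """Average throughput in megabits per second."""
    return num_bytes * 8 / seconds / 1_000_000

def timed_download(url, chunk_size=1 << 20):
    """Stream a product file in chunks and return (total_bytes, seconds)."""
    total, start = 0, time.perf_counter()
    with urlopen(url) as resp:
        while chunk := resp.read(chunk_size):
            total += len(chunk)
    return total, time.perf_counter() - start
```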
This test evaluates the download capacity of the target site for online data using its maximum concurrent download capacity. It is the same as TC301 with as many concurrent downloads as the configured maximum allows.
The obtained metrics are the same as in TC301.
This test evaluates the download capacity of the target site for downloading data in bulk. It is the same as TC301 with as many downloads as the systematic search (TC213) returned.
The obtained metrics are the same as in TC301.
This test evaluates the capacity of the target site for downloading offline data. The test client sends multiple concurrent remote HTTP web requests to retrieve one or several product files from a set of selected URLs that are pointing to offline data.
The obtained metrics are the same as in TC301 and also include the latency for the availability of offline products.
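The offline-availability latency could be measured with a polling loop like the sketch below; `check_online` stands in for whatever status call the provider offers for staged products (hypothetical), and the intervals are illustrative:

```python
import time

def wait_until_online(check_online, poll_interval=60.0, timeout=86400.0,
                      clock=time.monotonic, sleep=time.sleep):
    """Poll until an offline (archived) product has been staged online
    and return the observed latency in seconds, or None on timeout.
    `check_online` is any zero-argument callable returning True once
    the product is retrievable."""
    start = clock()
    while clock() - start < timeout:
        if check_online():
            return clock() - start
        sleep(poll_interval)
    return None
```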
This test is the remote version of TC301. Its results are derived from executing TC301 from a virtual machine on the target provider's cloud (if the provider offers processing infrastructure).
This test is the remote version of TC302. Its results are derived from executing TC302 from a virtual machine on the target provider's cloud (if the provider offers processing infrastructure).
This test measures the cloud services capacity of the target site for provisioning a single virtual machine. The test client sends a remote web request using the cloud services API of the target site to request a typical virtual machine. Once the machine is ready, the test client executes a command within a docker container to start TC211 and TC311.
The obtained metrics include the provisioning latency and the process duration as well as information about the virtual machine configuration and related costs.
This test measures the cloud services capacity of the target site for provisioning multiple virtual machines. The test client sends remote web requests using the cloud services API of the target site to request N typical virtual machines. Once a machine is ready, the test client executes a command within a docker container to start TC212 and TC312.
The obtained metrics are the same as in TC411.
This test measures the cloud services capacity of the target site for provisioning virtual machines with the capability of running data-transforming algorithms. The test client sends a remote web request using the cloud services API of the target site to request a typical virtual machine. Once the machine is ready, the test client executes a command within a docker container to download a test product and run an algorithm to produce one or more outputs from it.
The obtained metrics are the same as in TC411.
This test evaluates the cloud services capacity of the target site for provisioning virtual machines with the capability of running pre-defined data-transforming algorithms in typical EO applications. The test client sends a request using the cloud services API of the target site to request a typical virtual machine. Once the machine is ready, the test case stages in its input data, executes the application matching the end-to-end scenario and compiles information about the success of the execution and related metrics.
Note: This test case is unlike the others because it runs different tests based on the end-to-end scenario use case for which it is called. They are all part of TS15, but cover very different applications and need to be considered separately and compared only within the same end-to-end scenario. The end-to-end scenario used is specified by its numeric ID, so for use case 1 (end-to-end scenario 1) the scenario ID would be TS15.1.
The obtained metrics are the same as in TC411.
This test case evaluates the catalogue coverage of a target site by collection. The test client sends multiple concurrent catalogue requests to retrieve the total number of products for all the possible combinations of filters in the configuration input. When timeliness is applicable on a collection, the search excludes the time-critical items (e.g. NRT, STC).
The obtained metrics include information about the number of results and coverage percentage.
This test case evaluates the local data coverage of a target site for all product type collections. The test client sends multiple concurrent catalogue requests to retrieve the total number of online and offline products for all the possible product types.
The obtained metrics are the same as in TC501.
This test evaluates the local data consistency of a target site by data offer collection. The test client sends multiple concurrent catalogue requests to retrieve the total number of online and offline products for all the possible product types.
The obtained metrics are the same as in TC501.
This test evaluates the data latency of a target site by collection. The test client sends multiple concurrent catalogue requests to retrieve the latest products per collection and compares their data publication time to the sensing time. A timeliness filter is applied on a collection when applicable to limit the search to the time-critical items (e.g. NRT, STC).
The obtained metrics include the average and maximum data operational latency and information about result quality.
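The latency computed here is the gap between sensing time and publication time, aggregated over the retrieved products; a minimal sketch (metric key names are illustrative):

```python
from datetime import datetime

def operational_latency_metrics(products):
    """Average and maximum latency (in seconds) between sensing time
    and data publication time. `products` is a list of
    (sensing_time, publication_time) datetime pairs."""
    latencies = [(published - sensed).total_seconds()
                 for sensed, published in products]
    return {"avgDataOperationalLatency": sum(latencies) / len(latencies),
            "maxDataOperationalLatency": max(latencies)}
```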
This test evaluates the data availability latency of a target site by collection with respect to a reference target site. The test client sends multiple concurrent catalogue requests to retrieve the latest products per collection and compares their data publication time to the sensing time. When a timeliness is applicable on a collection, searches exclude the time-critical items (e.g. NRT, STC).
The obtained metrics include the average and maximum data availability latency and information about result quality.
This test evaluates the upload performance on a target site’s storage. The test client randomly generates a large file and uploads it to a newly created storage.
The obtained metrics include the data throughput, i.e. upload speed.
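The core of this measurement, generating a random payload and timing the transfer, can be sketched as below; `upload` stands in for the provider's storage API call (hypothetical), which varies per target site:

```python
import os
import time

def random_payload(size):
    """A random file body of `size` bytes, generated in memory."""
    return os.urandom(size)

def timed_upload(data, upload, clock=time.perf_counter):
    """Time an upload callable over `data` and return the throughput in
    bytes per second. `upload` is a one-argument callable standing in
    for the provider's storage API."""
    start = clock()
    upload(data)
    return len(data) / (clock() - start)
```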
The test client downloads the file uploaded in TC701 from the storage, which is deleted upon completion.
The obtained metrics include the data throughput, i.e. download speed.