From 7fd33f3d8c1b1f222773be15460360d1e475c418 Mon Sep 17 00:00:00 2001 From: floeschau Date: Tue, 10 Nov 2020 21:26:01 +0100 Subject: [PATCH 1/6] Main documentation outline --- software-documentation.md | 240 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 240 insertions(+) create mode 100644 software-documentation.md diff --git a/software-documentation.md b/software-documentation.md new file mode 100644 index 0000000..86e3c00 --- /dev/null +++ b/software-documentation.md @@ -0,0 +1,240 @@ +# Software repository + +## General + +The purpose of the CDAB software suite is to allow users to perform automated tests on DIASes. + +The test scenarios that can be executed are listed below in this document. + +There are two test tools: + +* **cdab-client**, a .NET/Mono-based tool to perform test scenarios TS01 to TS07 locally on the machine from which they run. +* **cdab-remote-client**, a Python-based tool to perform test scenarios TS11 to TS15, which require a virtual machine on the provider. + +## Prerequisites + +The software can be used on all modern versions of **Linux**, **macOS** and **Windows**. + +For **cdab-client**: + +You need to install the latest version of **Mono**, the cross-platform open-source .NET framework. + +For an efficient use of the tool it is recommended to use **Visual Studio Code** with the following extensions: +* C# +* Mono Debug +* .NET Core Test Explorer + + +For **cdab-remote-client**: + +You need **Python** (at least version 3.6) and the Python package *python-openstackclient* (to be able to use the OpenStack interfaces of the DIAS providers). + + +## Configuration + +Both tools rely on a configuration file in YAML format in which information about the various providers is configured. The YAML format is hierarchical. +The software suite contains a sample configuration that is ready to use apart from the user credentials, which have to be replaced with correct ones.
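As a quick orientation, the overall shape of such a configuration file looks like this (a minimal sketch with values taken from the sample configuration elsewhere in this repository; the credentials are placeholders to be replaced):

```yaml
global:
  docker_config: "docker-config.json"
  country_shapefile_path: /usr/lib/cdab-client/App_Data/TM_WORLD_BORDERS-0.3/TM_WORLD_BORDERS-0.3
service_providers:
  # One section per provider, named after the provider
  ApiHub:
    max_catalogue_thread: 5
    max_download_thread: 2
    data:
      url: https://scihub.copernicus.eu/apihub
      credentials: username:password
```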
+ + +The following sections explain the configuration, which consists of two main nodes: + +* global: Contains settings that apply globally, across different service providers and test scenarios. +* service_providers: Contains specific sections for individual DIASes. + + +### `global` node + +The following settings are related to catalogue searches and apply across providers. + +* `reference_target_site`: Reference target site for TS05 and TS06. +* `country_shapefile_path`: Path to the shapefile containing country borders; used for making queries for country coverage. + +The following settings are related to processing-related test scenarios and apply across providers. + +* `docker_config`: Docker configuration file to be used for Docker repository connections on virtual machines. +* `connect_retries`: Number of attempts to connect to virtual machines via SSH. +* `connect_interval`: Interval between attempts to connect to virtual machines via SSH (in seconds, fractions also possible). +* `ca_certificate`: An array of optional certificates to be installed on virtual machines used for processing. +* `max_retention_hours`: The number of hours after which previously created virtual machines are considered idle (in case they were not deleted properly during test execution; the check takes place before creating a new VM on a service provider). + + + +### `service_providers` node + +Under this node many different configurations for DIASes or similar providers can be attached. Each of them has a root node that should be named after the provider. Under that root node there are several keys for settings related to that provider. + +* `max_catalogue_thread`: Maximum number of threads querying the target catalogue service in parallel. +* `max_download_thread`: Maximum number of threads downloading from the target download service in parallel. + + +* `data`: A node with further settings related to catalogue search and download (see below).
+* `compute`: A node with further settings related to remote execution of test scenarios (see below). +* `storage`: A node with further settings related to storage of user-produced data (see below). + +The following sections explain the more complex nodes in detail. + + +#### `data` node + +These settings regard the data offering of each provider. They can be complicated, especially the definition of collections. It is recommended to use the sample configuration file and adjust only the credentials. These settings are relevant for all scenarios that perform searches or downloads. + +TODO Explain settings + +#### `compute` node + +These settings configure the processing within the cloud infrastructure of the service providers. They are relevant for TS11, TS12, TS13 and TS15. + +* **connector**: The value has to be *openstack*, which is also the default (making this setting optional). +* **auth_url**: Authentication access point. Obtain value from `auth_url` key in *clouds.yaml*. +* **username**: Cloud username (same username as for access to the OpenStack dashboard). +* **password**: Cloud password (password for user). +* **project_id**: Project ID. Obtain value from `project_id` key in *clouds.yaml*. This setting is optional. +* **project_name**: Project name. Obtain value from `project_name` key in *clouds.yaml*. +* **user_domain_name**: User domain name. Obtain value from `user_domain_name` key in *clouds.yaml*. +* **region_name**: Authentication region name. Obtain value from `region_name` key in *clouds.yaml*. This setting is optional. +* **interface**: Interface. Obtain value from `interface` key in *clouds.yaml*. +* **identity_api_version**: Identity API version. Obtain value from `identity_api_version` key in *clouds.yaml*. +* **volume_api_version**: Volume API version (set value to *2* if version *3* is not supported). +* **key_name**: Name of predefined public key for SSH connection to new virtual machine.
Key pairs can be created on the OpenStack dashboard and the private key can be downloaded; check under *Compute > Key Pairs*. +* **image_name**: Name of image to be used for new virtual machine. On the OpenStack dashboard, choose from *Compute > Images*. +* **flavor_name**: Name of flavour (hardware characteristics) for new virtual machine. On the OpenStack dashboard, check under *Compute > Instances > Launch Instance > Flavours/Flavors*. The value can also be an array, e.g. *['flavourA', 'flavourB']*. +* **network_name**: Name of network to which new virtual machine is connected. On the OpenStack dashboard, check under *Network > Networks*. This setting is optional. The value can also be an array, e.g. *['networkA', 'networkB']*. +* **security_group**: Name of security group for new virtual machine (optional; this setting might be necessary in order to permit remote access to virtual machines). On the OpenStack dashboard, check under *Network > Security groups*, if available. +* **floating_ip**: Explicitly assign floating IP (set this to *True* if public IP addresses are not assigned automatically at the creation of a virtual machine and otherwise to *False*). On the OpenStack dashboard, check under *Network > Floating IPs*, if available. +* **floating_ip_network**: Network from which to assign floating IP. This setting is optional. +* **private_key_file**: Location of the private key file for SSH connections to virtual machine (must correspond to public key in **key_name**). +* **remote_user**: User on virtual machine for SSH connections. +* **use_volume**: Create an external volume for Docker image and test execution; this is useful for flavours that have a very limited main disk. The size of the additional disk is 20 GB.
+* **download_origin**: Value of the environment variable `DOWNLOAD_ORIGIN` for the Docker image execution on the virtual machine. +* **vm_name**: Preferred name of virtual machines to be created (a sequential number is appended). +* **key_name**, **image_name**, **flavor_name**: These are mandatory for all types of providers, but differ among them; see the explanations in the individual sections. +* **cost_monthly**: Monthly cost for a VM of the specified flavour (instance type or machine type, see sections below). The default is *0*. If there is more than one flavour, the value has to be an array of the same size. +* **cost_hourly**: Hourly cost of a VM of the specified flavour. The default is *0*. If there is more than one flavour, the value has to be an array of the same size. +* **currency**: Payment currency. The default is *EUR*. + + +#### `storage` node + +These settings configure storage access for the service providers. They are relevant for TS07. + +TODO Explain settings + + +## Test scenarios + +The test scenarios execute a sequence of basic test cases which are explained in the following section. + +The following table shows the test scenarios that access the service provider from the user's machine. They are run using **cdab-client**. + +| Scenario ID | Title | Test case sequence | +| TS01 | Simple data search and single download | TC101 → TC201 → TC301 | +| TS02 | Complex data search and bulk download | TC101 → TC202 → TC302 | +| TS03 | Systematic periodic data search and related remote data download | TC203 → TC303 | +| TS04 | Offline data download | TC204 → TC304 | +| TS05 | Data Coverage Analysis | TC501 → TC502 | +| TS06 | Data Latency Analysis | TC601 → TC602 | +| TS07 | Storage Upload and Download Performance | TC701 → TC702 | + +The following table shows the test scenarios that run on virtual machines within the service providers' cloud infrastructure. They are run using **cdab-remote-client**.
+ +| Scenario ID | Title | Test cases | +| TS11 | Cloud services simple local data search and single local download on single virtual machines | TC411 (→ TC211 → TC311) | +| TS12 | Cloud services complex local data search and multiple local download on multiple virtual machines | TC412 (→ TC212 → TC312) | +| TS13 | Cloud services simple local data search, download and simple processing of downloaded data | TC413 | +| TS15 | Cloud services processing of specific workflows | TC415 (→ TC416) | + +Test scenario 15 (TS15) covers several end-to-end scenarios which are independent of each other. + + +## Test cases + +The following sections give a short overview of the test cases and how they can be configured and run. + +### TC101: Service Reachability + +### TC201: Basic query + +### TC202: Complex query (geo-time filter) + +### TC203: Specific query (handle multiple results pages) + +### TC204: Offline data query + +### TC211: Basic query from cloud services + +### TC212: Complex query from cloud services + +### TC301: Single remote online download + +### TC302: Multiple remote online download + +### TC303: Remote Bulk download + +### TC304: Remote Bulk download + +### TC311: Single remote online download from cloud services + +### TC312: Multiple remote online download from cloud services + +### TC411: Cloud Services Single Virtual Machine Provisioning + +### TC412: Cloud Services Multiple Virtual Machine Provisioning + +### TC413: Cloud Services Virtual Machine Provisioning for Processing + +### TC415: Automated Processing of End-to-End Scenario of Specific Applications + +### TC501: Catalogue Coverage + +### TC502: Local Data Coverage + +### TC503: Data Offer Consistency + +### TC601: Data Operational Latency Analysis [Time Critical] + +### TC602: Data Availability Latency Analysis + +### TC701: Data Storage Upload Analysis + +### TC702: Data Storage Download Analysis + + +# Usage and processing steps of the tools + +## cdab-client + +TODO usage,
arguments + +**cdab-client** performs the following steps: + +TODO: +... + +## cdab-remote-client + +TODO usage, arguments + +**cdab-remote-client** performs the following steps: + +* Check the command line arguments, configuration and selected test scenarios and start the test, making sure that there is no misconfiguration. +* The cloud environment to be used is obtained from the value of the `-sp` option which determines the service provider section in the configuration file to be used (values are taken from its **compute** subsection). +* The target site parameters to be used are obtained from the value of the `-te` and `-tc` options. Alternatively the `-ts` option is used; it determines the service provider section in the configuration file to be used (values are taken from its **data** subsection). +* If configured via the **floating_ip** key in the main configuration file, get the list of available floating IP addresses and make sure they are sufficient to perform all tests in parallel. +* Delete old virtual machines no longer in use according to the **max_retention_hours** global setting. +* Start a new thread for each requested virtual machine (`-vm` option) and do the following in parallel for each: + + * Create the virtual machine (using the `openstack server create` command or an equivalent for other providers). + * If configured via the **floating_ip** key in the main configuration file, assign a floating IP address to the virtual machine (using the `openstack server add floating ip` command or an equivalent for other providers if applicable). + * If configured via the **use_volume** key, create and attach the volume to the virtual machine (using the `openstack volume create` and `openstack server add volume` commands or equivalents for other providers if applicable) and partition, format and mount the volume.
+ * Install Docker and start the Docker service (in case the key **use_volume** was set to *True*, change the local Docker repository location to the new volume). + * Transfer the Docker authentication file in order to be able to authenticate with the Terradue Docker repository. + * Install the testing suite image containing the **cdab-client** tool (or other images). + * Run the test scenario based on the command-line arguments, configuration settings and mapping of remote test scenarios onto **cdab-client** scenarios or other testing executables. + * After conclusion extract the result files (*TS\*Results.json* and *junit.xml*) from the Docker container and download them. + * If configured via the **use_volume** key, detach the volume from the virtual machine and delete it (using the `openstack server remove volume` and `openstack volume delete` commands or equivalents for other providers if applicable). + * Delete the virtual machine (using the `openstack server delete` command or an equivalent for other providers). + +* When all threads have completed, calculate the metrics described above and produce a *TS\*Results.json* file containing the information about the executed test scenario and an updated *junit.xml*. + From 2bd79590015f661f1fb3be916ac47b6a7fe77e37 Mon Sep 17 00:00:00 2001 From: floeschau Date: Tue, 10 Nov 2020 21:27:19 +0100 Subject: [PATCH 2/6] Main documentation outline (2) --- software-documentation.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/software-documentation.md b/software-documentation.md index 86e3c00..0ae1917 100644 --- a/software-documentation.md +++ b/software-documentation.md @@ -128,14 +128,14 @@ The test scenarios execute a sequence of basic test cases which are explained in The following table shows the test scenarios that access the service provider from the user's machine. They are run using **cdab-client**.
-| Scenario ID | Title | Test case sequence | -| TS01 | Simple data search and single download | TC101 → TC201 → TC301 | -| TS02 | Complex data search and bulk download | TC101 → TC202 → TC302 | -| TS03 | Systematic periodic data search and related remote data download | TC203 → TC303 | -| TS04 | Offline data download | TC204 → TC304 | -| TS05 | Data Coverage Analysis | TC501 → TC502 | -| TS06 | Data Latency Analysis | TC601 → TC602 | -| TS07 | Storage Upload and Download Performance | TC701 → TC702 | +Scenario ID | Title | Test case sequence +TS01 | Simple data search and single download | TC101 → TC201 → TC301 +TS02 | Complex data search and bulk download | TC101 → TC202 → TC302 +TS03 | Systematic periodic data search and related remote data download | TC203 → TC303 +TS04 | Offline data download | TC204 → TC304 +TS05 | Data Coverage Analysis | TC501 → TC502 +TS06 | Data Latency Analysis | TC601 → TC602 + TS07 | Storage Upload and Download Performance | TC701 → TC702 The following table shows the test scenarios that run on virtual machines within the service providers' cloud infrastructure. They are run using **cdab-remote-client**. From c7197dd7fc44de8f8017c5435e361e42238578dc Mon Sep 17 00:00:00 2001 From: floeschau Date: Wed, 11 Nov 2020 08:40:36 +0100 Subject: [PATCH 3/6] Main documentation outline (3) --- software-documentation.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/software-documentation.md b/software-documentation.md index 0ae1917..cd12e71 100644 --- a/software-documentation.md +++ b/software-documentation.md @@ -128,7 +128,9 @@ The test scenarios execute a sequence of basic test cases which are explained in The following table shows the test scenarios that access the service provider from the user's machine. They are run using **cdab-client**. 
+ Scenario ID | Title | Test case sequence +-----|-----|----- TS01 | Simple data search and single download | TC101 → TC201 → TC301 TS02 | Complex data search and bulk download | TC101 → TC202 → TC302 TS03 | Systematic periodic data search and related remote data download | TC203 → TC303 From d39fbdebfcc5d6c3f494b7b7dc6387306bf7c227 Mon Sep 17 00:00:00 2001 From: Emmanuel Mathot Date: Wed, 11 Nov 2020 15:55:07 +0100 Subject: [PATCH 4/6] review readme --- README.md | 28 +++++++++++----------------- src/cdab-client/config.sample.yaml | 9 ++------- 2 files changed, 13 insertions(+), 24 deletions(-) diff --git a/README.md b/README.md index c3481dd..71ddd87 100644 --- a/README.md +++ b/README.md @@ -5,35 +5,29 @@ # CDAB Software Test Suite -Copernicus Sentinels Data Access Worldwide Benchmark Test Suite is the software suite used to run Test Scenarios for bechmarking various targets. +Copernicus Sentinels Data Access Worldwide Benchmark Test Suite is the software suite used to run Test Scenarios for benchmarking various Copernicus Data Provider targets.
-The current supported target sites are +The current supported Target Sites are -* Conventional Data Access Hubs: +* Data Access Hubs using [DHuS Data Hub](https://sentineldatahub.github.io): * [Copernicus Open Access Hub (aka SciHub)](https://scihub.copernicus.eu/) * [Copernicus Open Access Hub API (aka APIHub)](https://scihub.copernicus.eu/twiki/do/view/SciHubWebPortal/APIHubDescription) * [Copernicus Collaborative Data Hub (aka ColHub)](https://colhub.copernicus.eu/) * [Copernicus Sentinels International Access Hub (aka IntHub)](https://inthub.copernicus.eu/) -* DIAS + * Any Data Access Hubs using [DHuS Data Hub](https://sentineldatahub.github.io) software + * [CODE-DE](https://code-de.org/) +* DIASes * [CREODIAS](https://creodias.eu/) * [Mundi Web Services](https://mundiwebservices.com/) * [ONDA](https://www.onda-dias.eu/) * [Sobloo](https://sobloo.eu/) +# Repository Content -## Getting started +This repository is a public repository with all the source code used for building the CDAB Test Suite. -The CDAB Test Suite is built automatically providing different assets with each release: -- Source code archive -- Set of binaries for the clients as archives and RPM package -- A docker image available publicly as [esacdab/testsuite](https://hub.docker.com/repository/docker/esacdab/testsuite) +The CDAB Test Suite is built automatically, providing a Docker image available publicly at [esacdab/testsuite](https://hub.docker.com/repository/docker/esacdab/testsuite) that can be used as a Test Site.
-## Using the Docker image +# Getting Started -### General Command Line Access - - docker run -it esacdab/testsuite /bin/bash -## Executing the Copernicus Data Access Benchmarking Tool and running Test Scenarios -The detailed information for executing the test scenarios are described in [the CDAB Test Suite wiki](https://github.com/Terradue/cdab-testsuite/wiki) +You can now start using the Test Suite by following the [Getting Started guide](https://github.com/Terradue/cdab-testsuite/wiki) diff --git a/src/cdab-client/config.sample.yaml b/src/cdab-client/config.sample.yaml index 2b8352c..318b328 100644 --- a/src/cdab-client/config.sample.yaml +++ b/src/cdab-client/config.sample.yaml @@ -1,8 +1,6 @@ global: docker_config: "docker-config.json" - # Reference target site for TS05 and TS06 - reference_target_site: ApiHub - country_shapefile_path: App_Data/TM_WORLD_BORDERS-0.3/TM_WORLD_BORDERS-0.3 + country_shapefile_path: /usr/lib/cdab-client/App_Data/TM_WORLD_BORDERS-0.3/TM_WORLD_BORDERS-0.3 # Test Mode (Download limited to 20MB) # test_mode: true # Service Providers definition @@ -1423,10 +1421,7 @@ service_providers: reference_target_site: S5PHub # ... and with the following parameters to filter out only baseline parameters: # All Copernicus products are baseline.
Just remove the very recent one to avoid newly ingested bias - - key: sensingStop - full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}end" - value: "[NOW-1D]" - label: "before yesterday" + - # ESA-DATASET filter - key: processingCenter full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}processingCenter" From ed047593f162d109dca4d31cdcc28f6967bb018e Mon Sep 17 00:00:00 2001 From: Emmanuel Mathot Date: Wed, 11 Nov 2020 18:08:53 +0100 Subject: [PATCH 5/6] config sample for use case #1 --- .../config.sample.yaml | 502 ++++++++++++++++++ 1 file changed, 502 insertions(+) create mode 100644 Use Cases/Scenario 1 - NDVI Mapping/config.sample.yaml diff --git a/Use Cases/Scenario 1 - NDVI Mapping/config.sample.yaml b/Use Cases/Scenario 1 - NDVI Mapping/config.sample.yaml new file mode 100644 index 0000000..24282fc --- /dev/null +++ b/Use Cases/Scenario 1 - NDVI Mapping/config.sample.yaml @@ -0,0 +1,502 @@ +global: + docker_config: "docker-config.json" + country_shapefile_path: /usr/lib/cdab-client/App_Data/TM_WORLD_BORDERS-0.3/TM_WORLD_BORDERS-0.3 + # Test Mode (Download limited to 20MB) + # test_mode: true +# Service Providers definition +service_providers: + # Service Provider Name (used in the CLI target_name argument) + ApiHub: + # Maximum number of threads querying the target catalogue service in parallel + max_catalogue_thread: 5 + # Maximum number of threads downloading from the target download service in parallel + max_download_thread: 2 + # Data Access + data: + # Entry Point (usually the URL to the catalogue) + url: https://scihub.copernicus.eu/apihub + # Credentials (usually username:password/apiKey) + credentials: username:password + catalogue: + # ApiHub catalogue collections' set definition + sets: + # ApiHub has the Copernicus catalogue (see baselines definition) ... + Copernicus: + # Identifier of the catalogue set configured in the data section + reference_set_id: Copernicus + # ... as a baseline (used in TC501, TC502, TC601 & TC602)...
+ type: baseline # The catalogue collections' set of type baseline refers to the common baseline defined globally + # ... with ApiHub as a reference ... + reference_target_site: ApiHub + # ... and with the following parameters to filter out only baseline + ONDA: + max_catalogue_thread: 5 + max_download_thread: 2 + data: + url: https://catalogue.onda-dias.eu/dias-catalogue/ + credentials: username:password + class: ONDA + # Catalogue specific configuration + catalogue: + # ONDA Catalogue collections' set definition + sets: + # ONDA has the Copernicus catalogue (see baselines definition) ... + Copernicus: + # Identifier of the catalogue set configured in the data section + reference_set_id: Copernicus + # ... as a baseline (used in TC501, TC502, TC601 & TC602)... + type: baseline # The catalogue collections' set of type baseline refers to the common baseline defined globally + # ... with ApiHub as a reference ... + reference_target_site: ApiHub + # ... and with the following parameters to filter out only baseline + parameters: # All Copernicus products are baseline. Just remove the very recent one to avoid newly ingested bias + - key: sensingStop + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}end" + value: "[NOW-1D]" + label: "before yesterday" + # and the Sentinel-5P (see baselines definition) ... + Copernicus-S5P: + # ... as a baseline (used in TC501, TC502, TC601 & TC602)... + type: baseline # The catalogue collections' set of type baseline refers to the common baseline defined globally + # ... with Sentinel-5P pre-Ops Hub as a reference + reference_target_site: S5PHub + # ... and with the following parameters to filter out only baseline + parameters: # All Copernicus products are baseline. Just remove the very recent one to avoid newly ingested bias + - key: sensingStop + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}end" + value: "[NOW-1D]" + label: "before yesterday" + # Onda has a data offering...
+ DataOffer: + # ... defined as locally managed (TC503) + type: local + # ... with ApiHub as a reference + reference_target_site: ApiHub + # collections' set definition + collections: + Sentinel2-MSI1C: + label: "Sentinel-2 Level-1C" + parameters: + - key: missionName + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}platform" + value: "Sentinel-2" + label: "Sentinel-2" + - key: productType + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}productType" + value: "S2MSI1C" + label: "Level-1C" + parameters: # Local data offer common filters. Just remove the very recent one to avoid newly ingested bias + - key: sensingStop + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}end" + value: "[NOW-1D]" + label: "before yesterday" + # Compute Services Access. For TS1X + compute: + # Authentication access point (obtain value from auth_url key in clouds.yaml). + auth_url: https://auth.cloud.ovh.net/v3 + # Cloud username (same username as for access to the OpenStack dashboard). + username: "login" + # Cloud password (password for user). + # password: "" + # Project ID (obtain value from project_id key in clouds.yaml). + # This setting is optional. + project_id: d6fae7e318114c248623562c0adce408 + # Project name (obtain value from project_name key in clouds.yaml). + project_name: "6611565740751529" + # User domain name (obtain value from user_domain_name key in clouds.yaml). + user_domain_name: "Default" + # Authentication region name (obtain value from region_name key in clouds.yaml). + # This setting is optional. + region_name: "GRA7" + # Interface (obtain value from interface key in clouds.yaml). + interface: "public" + # Identity API version (obtain value from identity_api_version key in clouds.yaml). + identity_api_version: 3 + # Preferred name of virtual machines to be created (sequential number is appended). 
+ vm_name: "cdab-test-onda" + # Name of predefined public key for SSH connection to new virtual machine + # (key pairs can be created and the private key can be downloaded, + # check under Compute > Key Pairs on the OpenStack dashboard). + key_name: "cdab-key" + # Name of image to be used for new virtual machine + # (choose from Compute > Images on the OpenStack dashboard). + image_name: "Centos 7" + # Name of flavour for new virtual machine + # (check under Compute > Instances > Launch Instance > Flavours/Flavors on the OpenStack dashboard). + flavor_name: "b2-7" + # Location of the private key file for SSH connections to virtual machine + # (must correspond to public key in key_name). + private_key_file: "cdab-key-onda.pem" + # User on virtual machine for SSH connections. + remote_user: "centos" + CREO: + max_catalogue_thread: 5 + max_download_thread: 5 + class: CREO + data: + url: https://finder.creodias.eu/resto/api/collections/describe.xml + credentials: username:password + catalogue: + # CREODias Catalogue collections' set definition + sets: + # CREODias has the Copernicus catalogue (see baselines definition) ... + Copernicus: + # Identifier of the catalogue set configured in the data section + reference_set_id: Copernicus + # ... as a baseline (used in TC501, TC502, TC601 & TC602)... + type: baseline # The catalogue collections' set of type baseline refers to the common baseline defined globally + # ... with ApiHub as a reference ... + reference_target_site: ApiHub + # ... and with the following parameters to filter out only baseline + parameters: # All Copernicus products are baseline.
Just remove the very recent one to avoid newly ingested bias + - key: sensingStop + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}end" + value: "[NOW-1D]" + label: "before yesterday" + # ESA-DATASET filter + - key: dataset + full_name: "{http://creodias.eu/namespace/sentinel}dataset" + value: "ESA-DATASET" + label: "ESA Baseline" + # exact count for creodias + - key: exactCount + full_name: "{http://mapshup.info/-/resto/2.0/}exactCount" + value: "1" + label: "Exact count" + # and the Sentinel-5P (see baselines definition) ... + Copernicus-S5P: + # ... as a baseline (used in TC501, TC502, TC601 & TC602)... + type: baseline # The catalogue collections' set of type baseline refers to the common baseline defined globally + # ... with Sentinel-5P pre-Ops Hub as a reference + reference_target_site: S5PHub + # ... and with the following parameters to filter out only baseline + parameters: # All Copernicus products are baseline. Just remove the very recent one to avoid newly ingested bias + - key: sensingStop + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}end" + value: "[NOW-1D]" + label: "before yesterday" + # ESA-DATASET filter + - key: dataset + full_name: "{http://creodias.eu/namespace/sentinel}dataset" + value: "ESA-DATASET" + label: "ESA Baseline" + # exact count for creodias + - key: exactCount + full_name: "{http://mapshup.info/-/resto/2.0/}exactCount" + value: "1" + label: "Exact count" + # CREODias has a data offering... + DataOffer: + # ... defined as locally managed (TC503) + type: local + # ...
with ApiHub as a reference + reference_target_site: ApiHub + # collections' set definition + parameters: + # Parameters to filter only locally managed products (for TC503) + # ESA-DATASET filter + - key: dataset + full_name: "{http://creodias.eu/namespace/sentinel}dataset" + value: "ESA-DATASET" + label: "ESA Baseline" + # exact count for creodias + - key: exactCount + full_name: "{http://mapshup.info/-/resto/2.0/}exactCount" + value: "1" + label: "Exact count" + collections: + Sentinel2-MSI1C: + label: "Sentinel-2 MSI Level-1C World" + parameters: + - key: missionName + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}platform" + value: "Sentinel-2" + label: "Sentinel-2" + - key: productType + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}productType" + value: "S2MSI1C" + label: "S2MSI1C" + compute: + auth_url: https://cf2.cloudferro.com:5000/v3 + username: "login" + # password: "" + project_id: 7236455993264b0c8970d22751b23562 + project_name: "cloud_07455 project_without_eo" + user_domain_name: "cloud_07455" + region_name: "RegionOne" + interface: "public" + identity_api_version: 3 + vm_name: "cdab-test-creo" + key_name: "cdab-key" + image_name: "CentOS 7" + flavor_name: "eo2.large" + security_group: "allow_ping_ssh_rdp" + floating_ip: True + private_key_file: "cdab-key-creodias.pem" + remote_user: "eouser" + storage: + auth_url: https://cf2.cloudferro.com:5000/v3 + username: "login" + # password: "" + project_id: 7236455993264b0c8970d22751b23562 + project_name: "cloud_07455 project_without_eo" + user_domain_name: "cloud_07455" + region_name: "RegionOne" + storage_name: "teststorage" + test_file: "path/to/testfile" + MUNDI: + max_catalogue_thread: 5 + max_download_thread: 2 + data: + url: https://mundiwebservices.com/acdc/catalog/proxy/search + credentials: username:password + s3_secret_key: "secret" + s3_key_id: id + class: MUNDI + catalogue: + # Mundi Catalogue collections' set definition + sets: + # Mundi has the Copernicus catalogue 
(see baselines definition) ... + Copernicus: + # Identifier of the cxatalogue set configured in the data section + reference_set_id: Copernicus + # ... as a baseline (used in TC5O1, TC502, TC601 & TC602)... + type: baseline # The catalogue collections' set of type baseline are referering to common baseline defined globally + # ... with ApiHub as a reference ... + reference_target_site: ApiHub + # ... and with the following parameters to filter out only baseline + parameters: # All Copernicus products are baseline. Just remove the very recent one to avoid newly ingested bias + - key: sensingStop + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}end" + value: "[NOW-1D]" + label: "before yesterday" + # ESA-DATASET filter + - key: processingCenter + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}processingCenter" + value: "ESA" + label: "ESA Baseline" + # and the Sentinel-5P (see baselines definition) ... + Copernicus-S5P: + # ... as a baseline (used in TC5O1, TC502, TC601 & TC602)... + type: baseline # The catalogue collections' set of type baseline are referering to common baseline defined globally + # ... with Sentinel-5P pre-Ops Hub as a reference + reference_target_site: S5PHub + # ... and with the following parameters to filter out only baseline + parameters: # All Copernicus products are baseline. Just remove the very recent one to avoid newly ingested bias + - + # ESA-DATASET filter + - key: processingCenter + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}processingCenter" + value: "ESA" + label: "ESA Baseline" + # Mundi has a data offering... + DataOffer: + # ... defined as locally managed (TC503) + type: local + # ... 
with ApiHub as a reference + reference_target_site: ApiHub + parameters: + # ESA-DATASET filter + - key: processingCenter + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}processingCenter" + value: "ESA" + label: "ESA Baseline" + # collections' set definition + collections: + Sentinel2-MSI1C-World: + label: "Sentinel-2 Level-1C World" + parameters: + - key: missionName + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}platform" + value: "Sentinel-2" + label: "Sentinel-2" + - key: productType + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}productType" + value: "S2MSI1C" + label: "Level-1C" + - key: sensingStart + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}start" + value: "[NOW-12M]" + label: "Last 12 months" + Sentinel2-MSI1C-Europe: + label: "Sentinel-2 Level-1C Europe" + parameters: + - key: missionName + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}platform" + value: "Sentinel-2" + label: "Sentinel-2" + - key: productType + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}productType" + value: "S2MSI1C" + label: "Level-1C" + - key: geom + full_name: "{http://a9.com/-/opensearch/extensions/geo/1.0/}geometry" + value: "POLYGON((-10.547 36.173,-2.109 36.031,6.855 38.548,11.25 37.996,20.391 34.452,35.156 34.162,42.363 67.942,26.895 72.342,-26.016 67.136,-10.547 36.173))" + label: "Europe" + - key: sensingStart + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}start" + value: "[NOW-48M]" + label: "Last 48 months" + Sentinel2-MSI2A-Europe: + label: "Sentinel-2 Level-2A Europe" + parameters: + - key: missionName + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}platform" + value: "Sentinel-2" + label: "Sentinel-2" + - key: productType + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}productType" + value: "S2MSI2A" + label: "Level-2A" + - key: geom + full_name: "{http://a9.com/-/opensearch/extensions/geo/1.0/}geometry" + value: "POLYGON((-10.547 
36.173,-2.109 36.031,6.855 38.548,11.25 37.996,20.391 34.452,35.156 34.162,42.363 67.942,26.895 72.342,-26.016 67.136,-10.547 36.173))" + label: "Europe" + - key: sensingStart + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}start" + value: "[NOW-48M]" + label: "Last 48 months" + compute: + auth_url: https://iam.eu-de.otc.t-systems.com:443/v3 + username: "login" + # password: "" + project_name: "eu-de" + user_domain_name: "OTC-EU-DE-00000000001000041106" + interface: "public" + identity_api_version: 3 + vm_name: "cdab-test-mundi" + key_name: "cdab-key" + image_name: "Standard_CentOS_7_latest" + flavor_name: "s2.large.4" + security_group: "default" + floating_ip: True + private_key_file: "cdab-key-mundi.pem" + remote_user: "linux" + storage: + auth_url: https://iam.eu-de.otc.t-systems.com:443/v3 + username: "login" + # password: "" + project_name: "eu-de" + user_domain_name: "OTC-EU-DE-00000000001000041106" + SOBLOO: + max_catalogue_thread: 5 + max_download_thread: 2 + data: + url: https://sobloo.eu/api/v1/services/search + credentials: username:apikey + class: SOBLOO + catalogue: + # Sobloo Catalogue collections' set definition + sets: + # Sobloo has the Copernicus catalogue (see baselines definition) ... + Copernicus: + # Identifier of the cxatalogue set configured in the data section + reference_set_id: Copernicus + # ... as a baseline (used in TC5O1, TC502, TC601 & TC602)... + type: baseline # The catalogue collections' set of type baseline are referering to common baseline defined globally + # ... with ApiHub as a reference ... + reference_target_site: ApiHub + # ... and with the following parameters to filter out only baseline + parameters: # All Copernicus products are baseline. Just remove the very recent one to avoid newly ingested bias + - key: sensingStop + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}end" + value: "[NOW-1D]" + label: "before yesterday" + # and the Sentinel-5P (see baselines definition) ... 
+ Copernicus-S5P: + # ... as a baseline (used in TC5O1, TC502, TC601 & TC602)... + type: baseline # The catalogue collections' set of type baseline are referering to common baseline defined globally + # ... with Sentinel-5P pre-Ops Hub as a reference + reference_target_site: S5PHub + # ... and with the following parameters to filter out only baseline + parameters: # All Copernicus products are baseline. Just remove the very recent one to avoid newly ingested bias + - key: sensingStop + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}end" + value: "[NOW-1D]" + label: "before yesterday" + # Sobloo has a data offering... + DataOffer: + # ... defined as locally managed (TC503) + type: local + # ... with ApiHub as a reference + reference_target_site: ApiHub + # collections' set definition + collections: + Sentinel2-L1-Europe-Africa: + label: "Sentinel-2 MSI L1C over Europe and Africa" + parameters: + - key: missionName + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}platform" + value: "Sentinel-2" + label: "Sentinel-2" + - key: productType + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}productType" + value: "S2MSI1C" + label: "S2MSI1C" + - key: geometry + full_name: "{http://a9.com/-/opensearch/extensions/geo/1.0/}geometry" + value: "POLYGON((-24.961 16.973,-21.094 4.215,4.57 0.352,13.711 -34.016,21.445 -36.031,55.195 -24.847,54.141 16.973,41.484 11.178,34.453 31.653,41.484 68.528,26.367 72.396,-26.719 67.475,-11.25 36.315,-24.961 16.973))" + label: "over Europe & Africa" + Sentinel2-L1-World: + label: "Sentinel-2 MSI L1C World last 12 months" + parameters: + - key: missionName + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}platform" + value: "Sentinel-2" + label: "Sentinel-2" + - key: productType + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}productType" + value: "S2MSI1C" + label: "S2MSI1C" + - key: sensingStart + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}start" + value: 
"[NOW-12M]" + label: "last 12 months" + Sentinel2-L2-Europe-Africa: + label: "Sentinel-2 MSI L2A over Europe and Africa" + parameters: + - key: missionName + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}platform" + value: "Sentinel-2" + label: "Sentinel-2" + - key: productType + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}productType" + value: "S2MSI2A" + label: "S2MSI2A" + - key: geometry + full_name: "{http://a9.com/-/opensearch/extensions/geo/1.0/}geometry" + value: "POLYGON((-24.961 16.973,-21.094 4.215,4.57 0.352,13.711 -34.016,21.445 -36.031,55.195 -24.847,54.141 16.973,41.484 11.178,34.453 31.653,41.484 68.528,26.367 72.396,-26.719 67.475,-11.25 36.315,-24.961 16.973))" + label: "over Europe & Africa" + Sentinel2-L1-World: + label: "Sentinel-2 MSI L2A World last 12 months" + parameters: + - key: missionName + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}platform" + value: "Sentinel-2" + label: "Sentinel-2" + - key: productType + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}productType" + value: "S2MSI2A" + label: "S2MSI2A" + - key: sensingStart + full_name: "{http://a9.com/-/opensearch/extensions/time/1.0/}start" + value: "[NOW-12M]" + label: "last 12 months" +data: + # This section defines all the logical sets of data + sets: + # Copernicus set (without S5P not yet considered as operational) + Copernicus: + name: "Copernicus Product Types" + collections: + Sentinel2-MSI1C: + label: "Sentinel-2 MSI Level-1C" + parameters: + - key: missionName + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}platform" + value: "Sentinel-2" + label: "Sentinel-2" + - key: productType + full_name: "{http://a9.com/-/opensearch/extensions/eo/1.0/}productType" + value: "S2MSI1C" + label: "S2MSI1C" \ No newline at end of file From 2589b37bf08d7db6999ed535022219d98d30c0f3 Mon Sep 17 00:00:00 2001 From: Emmanuel Mathot Date: Thu, 12 Nov 2020 14:21:32 +0100 Subject: [PATCH 6/6] bump to 84 --- Jenkinsfile | 2 +- 
 build.yml   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 380c2eb..3a080af 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -100,8 +100,8 @@ pipeline {
         def cdabclientrpm = findFiles(glob: "src/docker/cdab-client-*.rpm")
         def cdabremoteclientrpm = findFiles(glob: "src/docker/cdab-remote-client-*.rpm")
         def descriptor = readDescriptor()
-        def testsuite = docker.build(descriptor.docker_image_name, "--build-arg CDAB_RELEASE=${descriptor.version} --build-arg CDAB_CLIENT_RPM=${cdabclientrpm[0].name} --build-arg CDAB_REMOTE_CLIENT_RPM=${cdabremoteclientrpm[0].name} ./src/docker")
         def mType=getTypeOfVersion(env.BRANCH_NAME)
+        def testsuite = docker.build(descriptor.docker_image_name, "--build-arg CDAB_RELEASE=${mType}${descriptor.version} --build-arg CDAB_CLIENT_RPM=${cdabclientrpm[0].name} --build-arg CDAB_REMOTE_CLIENT_RPM=${cdabremoteclientrpm[0].name} ./src/docker")
         docker.withRegistry('https://registry.hub.docker.com', 'dockerhub-emmanuelmathot') {
           testsuite.push("${mType}${descriptor.version}")
           testsuite.push("${mType}latest")
diff --git a/build.yml b/build.yml
index 8001c7c..8c10695 100644
--- a/build.yml
+++ b/build.yml
@@ -1,2 +1,2 @@
 docker_image_name: esacdab/testsuite
-version: 81
+version: 84
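The per-provider blocks in the sample configuration above share a common shape (thread limits plus a `data` node), and the placeholder credentials must be replaced before a run. The sketch below illustrates how that structure could be sanity-checked once the YAML has been parsed into nested dicts; the function name `validate_providers` and the required-key list are illustrative assumptions, not part of the CDAB suite.

```python
# Sketch: validating the per-provider structure of a cdab configuration.
# The config is assumed to be already parsed (e.g. by a YAML library) into
# nested dicts; provider and key names mirror the sample configuration above.

REQUIRED_PROVIDER_KEYS = {"max_catalogue_thread", "max_download_thread", "data"}

def validate_providers(config: dict) -> list:
    """Return a list of human-readable problems found under service_providers."""
    problems = []
    for name, settings in config.get("service_providers", {}).items():
        # Every provider entry should carry the common keys
        missing = REQUIRED_PROVIDER_KEYS - settings.keys()
        if missing:
            problems.append(f"{name}: missing keys {sorted(missing)}")
        # The sample ships placeholder credentials that must be replaced
        data = settings.get("data", {})
        if data.get("credentials") == "username:password":
            problems.append(f"{name}: placeholder credentials still present")
    return problems

# Example mirroring the MUNDI entry from the sample configuration
sample = {
    "service_providers": {
        "MUNDI": {
            "max_catalogue_thread": 5,
            "max_download_thread": 2,
            "data": {
                "url": "https://mundiwebservices.com/acdc/catalog/proxy/search",
                "credentials": "username:password",
            },
        }
    }
}

for problem in validate_providers(sample):
    print(problem)
```

A check like this also guards against a subtle YAML pitfall: most parsers silently keep only the last of two identical mapping keys, so a duplicated collection name would drop a collection without any warning.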