diff --git a/Makefile b/Makefile
index 0bf6a34f..7fc8a5fd 100644
--- a/Makefile
+++ b/Makefile
@@ -35,14 +35,14 @@ deploy-hub_monitoring-azure:
 	ssh-add deployment/grafvio_id_rsa # chmod 400 deployment/grafvio_id_rsa
 	ansible-playbook deployment/ansible/update_grafana_dashboard.yml -i deployment/ansible/inventory --extra-vars ansible_port=22000

-.PHONY: services-up ## 🐳 Start all services (mongodb, model_serving, supervisor, ui)
-services-up:
+.PHONY: vio-edge-up ## 🐳 Start all services (mongodb, model_serving, supervisor, ui)
+vio-edge-up:
 	docker-compose up -d --build

-.PHONY: services-up-raspberrypi ## 🐳 Start all services on RaspberryPI (mongodb, model_serving, supervisor, ui)
-services-up-raspberrypi:
+.PHONY: vio-edge-up-raspberrypi ## 🐳 Start all services on RaspberryPI (mongodb, model_serving, supervisor, ui)
+vio-edge-up-raspberrypi:
 	docker-compose -f docker-compose.raspberrypi.yml up -d

-.PHONY: services-down ## ❌ Stop all services (model_serving, supervisor, ui)
-services-down:
+.PHONY: vio-edge-down ## ❌ Stop all services (model_serving, supervisor, ui)
+vio-edge-down:
 	docker-compose down
diff --git a/README.md b/README.md
index c6398451..59376122 100644
--- a/README.md
+++ b/README.md
@@ -6,17 +6,25 @@
 Visual Inspection Orchestrator is a modular framework made to ease the deployment of VI usecases.

-Usecase example: Quality check of a product manufactured on an assembly line.
+*Usecase example: Quality check of a product manufactured on an assembly line.*

 VIO full documentation can be found [here](https://octo-technology.github.io/VIO/)

-## Features
-- [The edge orchestrator](docs/supervisor.md)
+The VIO modules are split between:
+
+**Edge modules**: The VIO edge modules are deployed close to the object to inspect
+
+- [The edge orchestrator](docs/edge_orchestrator.md)
 - [The edge interface](docs/edge_interface.md)
-- [The edge model serving](docs/model_serving.md)
-- [The hub monitoring](docs/monitoring.md)
-- [The deployment tools](docs/deployment.md)
+- [The edge model serving](docs/edge_model_serving.md)
+- [The edge deployment playbook](docs/edge_deployment.md)
+
+**Hub modules**: The VIO hub modules are deployed in the cloud to collect data and orchestrate the edge fleet
+
+- [The hub monitoring](docs/hub_monitoring.md)
+- [The hub deployment playbook](docs/hub_deployment.md)
+

 ## Install the framework

@@ -30,9 +38,9 @@ Prerequisites:
 To launch the stack you can use the [Makefile](../Makefile) on the root of the repository which define the different target based on the [docker-compose.yml](../docker-compose.yml):

-- run all services (supervisor, model-serving, Mongo DB, UI) : `make services-up`
+- run all edge services (orchestrator, model-serving, interface, db) with local hub monitoring (grafana): `make vio-edge-up`

-- stop and delete all running services : `make services-down`
+- stop and delete all running services : `make vio-edge-down`

 To check all services are up and running you can run the command `docker ps`, you should see something like below:

@@ -40,15 +48,21 @@ To check all services are up and running you can run the command `docker ps`, yo
 Once all services are up and running you can access:

-- the swagger of the core API (OrchestratoAPI): [http://localhost:8000/docs](http://localhost:8000/docs)
-- the swagger of the model serving: [http://localhost:8501/docs](http://localhost:8501/docs)
-- the monitoring grafana: [http://localhost:4000/login](http://localhost:4000/login)
+- the swagger of the edge orchestrator API (OrchestratoAPI):
[http://localhost:8000/docs](http://localhost:8000/docs) +- the swagger of the edge model serving: [http://localhost:8501/docs](http://localhost:8501/docs) +- the hub monitoring: [http://localhost:4000/login](http://localhost:4000/login) - the edge interface: [http://localhost:8080](http://localhost:8080) From the edge interface you can load a configuration and run the trigger button that will trigger the Core API and launch the following actions: ![vio-architecture-stack](docs/images/supervisor-actions.png) +# Releases + +Build Type | Status | Artifacts +----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------- +**Docker images** | [![Status](https://github.com/octo-technology/VIO/actions/workflows/publication_vio_images.yml/badge.svg)](https://github.com/octo-technology/VIO/actions/workflows/publication_vio_images.yml/badge.svg) | [Github registry](https://github.com/orgs/octo-technology/packages) + ## License VIO is licensed under [Apache 2.0 License](docs/LICENSE.md) diff --git a/docs/CICD.md b/docs/CICD.md index ef694f55..51854071 100644 --- a/docs/CICD.md +++ b/docs/CICD.md @@ -1,61 +1,34 @@ -# La CICD +# The CICD -Nous utilisons les workflows de Github Actions pour l'intĂ©gration continue et pour le dĂ©ploiement continu. +We use Github Actions workflows for continuous integration and continuous deployment. -Il existe 5 workflows : +## The continuous integration workflows -2 workflows de CI: +- [ci_edge_interface.yml](https://github.com/octo-technology/VIO/tree/main/.github/workflows/ci_edge_interface.yml): the CI of the edge_interface application is decomposed into 2 jobs + + job lint_and_test_on_edge_interface: static code analysis of JavaScript code (no tests at the moment) + job build_and_push_images: building the Docker image of the application without publishing to a registry -- [ci_edge_interface.yml](https://github.com/octo-technology/VIO/tree/main/.github/workflows/ci_edge_interface.yml) : - intĂ©gration continue de l'application - edge_interface dĂ©composĂ©e en 2 jobs - - job `lint_and_test_on_edge_interface` : analyse statique du code JavaScript (pas de tests pour le moment) - - job `build_and_push_images` : construction de l'image Docker de l'application sans publication dans une registry -- [ci_edge_orchestrator.yml](https://github.com/octo-technology/VIO/tree/main/.github/workflows/ci_edge_orchestrator.yml) : - intĂ©gration continue de l'application - edge_orchestrator dĂ©composĂ©e en 2 jobs - - job `lint_and_test_on_edge_orchestrator` : analyse statique du code Python avec Flake8 suivie de l'exĂ©cution des - tests automatisĂ©s (unitaires, intĂ©gration et fonctionnels) avec le stockage des rapports de tests dans Github - - job `build_and_push_images` : construction de l'image Docker de l'application sans publication dans une registry +- [ci_edge_orchestrator.yml](https://github.com/octo-technology/VIO/tree/main/.github/workflows/ci_edge_orchestrator.yml): the CI of the edge_orchestrator application is decomposed into 2 jobs + + job lint_and_test_on_edge_orchestrator: static code analysis with Flake8 followed by automated tests (unit, integration, and functional) with storing test reports in Github + job build_and_push_images: building the Docker image of the application without publishing to a registry -Les deux workflows de CI (edge_[interface|orchestrator]_ci.yml) sont dĂ©clenchĂ©s sous l'une des conditions 
suivantes : +The CI workflows (edge_[interface|orchestrator]_ci.yml) are triggered under one of the following conditions: +- if a merge request with differences is opened on Github +- if a commit on the master branch is pushed to Github -- si une merge request comportant des diffĂ©rences est ouverte sur Github -- si un commit sur la branche master est pushĂ© sur Github +## The release workflows -3 worklows de release: +- [publication_vio_images.yml](https://github.com/octo-technology/VIO/tree/main/.github/workflows/publication_vio_images.yml): publication of Docker edge_serving images by manual trigger job + + build_and_push_images: building Docker images with publishing images to the Github registry -- [publication_vio_images.yml](https://github.com/octo-technology/VIO/tree/main/.github/workflows/publication_vio_images.yml) : - publication des images Docker edge_serving par dĂ©clenchement manuel - - job `build_and_push_images` : construction des images Docker avec publication des images dans la registry Github -- [publication_vio_images_raspberry.yml](https://github.com/octo-technology/VIO/tree/main/.github/workflows/publication_vio_images_raspberry.yml) : - publication des images Docker edge_serving par dĂ©clenchement manuel - - job `build_and_push_images` : construction des images Docker spĂ©cifique au hardware Raspberry avec publication des images dans la registry Github -- [publication_pages_gh-pages_branch.yml](https://github.com/octo-technology/VIO/tree/main/.github/workflows/publication_pages_gh-pages_branch.yml) : - gĂ©nĂ©ration et dĂ©ploiment de la documentation +- [publication_vio_images_raspberry.yml](https://github.com/octo-technology/VIO/tree/main/.github/workflows/publication_vio_images_raspberry.yml): publication of Docker edge_serving images by manual trigger job -Les 3 workflows de release sont dĂ©clenchĂ©s sous l'une des conditions suivantes : + build_and_push_images: building Docker images specific to Raspberry hardware with publishing images to the Github registry -- si une release est crĂ©e depuis Github +- [publication_pages_gh-pages_branch.yml](https://github.com/octo-technology/VIO/tree/main/.github/workflows/publication_pages_gh-pages_branch.yml): generation and deployment of documentation - -////////////////////////////// WIP ////////////////////////////// - -Pour dĂ©ployer une nouvelle version sur RaspberryPI, il faut d'abord crĂ©er des images Docker spĂ©cifiques pour le device -en question. -Afin de crĂ©er ces images, il suffit d'ajouter un tag Git, en suivant la -convention [SemVer](https://semver.org/lang/fr/). Par exemple: - -``` -git tag rpi-1.2.1 -git push --tags -``` - -Une fois le tag poussĂ©, cela dĂ©clenche une pipeline Gitlab CI qui va construire les images Docker pour RaspberryPI. -Celles-ci seront stockĂ©es dans la registry Gitlab, et elles-mĂȘmes taguĂ©es avec le mĂȘme tag `rpi-1.2.1`. - -Enfin, il faut prĂ©ciser Ă  Azure IoT Hub qu'on souhaite dĂ©ployer ces nouvelles versions sur les dispositifs Edge. -Pour cela, il suffit mettre Ă  jour les variables dans le fichier `deployment/ansible/setup_iot_hub_azure.yml` et -relancer le playbook Ansible. 
- -////////////////////////////// WIP ////////////////////////////// +The release workflows are triggered under one of the following conditions: +- if a release is created from Github diff --git a/docs/CONTRIBUTING.md b/docs/CONTRIBUTING.md index 1aed22d0..774e4d81 100644 --- a/docs/CONTRIBUTING.md +++ b/docs/CONTRIBUTING.md @@ -1,12 +1,12 @@ # Contributing -**Contribution rules** +## Contribution rules - The code must be exhaustively tested. - Python test package: `pytest` (with possibility to use unittest mocks) - Code style: PEP8 - Programming language for code and comments: English -**Coding conventions** +## Coding conventions - 120 character max per line - Use python 3.6 `fstring` instead of `format()` or `%s` - Directories, filenames, function and method names in `snake_case` @@ -20,7 +20,7 @@ corresponding class). existing dead code. - Use [pathlib](https://docs.python.org/3/library/pathlib.html#module-pathlib) instead of native Python [os.path](https://docs.python.org/3/library/os.path.html) -**Exception conventions** +## Exception conventions - Create a custom exception in the module `exception.py` as follow: ```python class MyCustomException(Exception): @@ -40,7 +40,7 @@ except MyCustomException as e: return True ``` -**Logging conventions** +## Logging conventions - The logger should always be used at the class level and not the module level. - The logger should always be created through the _getChild_ method as a class attribute. - The created child logger should always be used in the class. @@ -60,7 +60,7 @@ class MyClassWithLogging: self.logger.info('Doing something!') ``` -**Test conventions** +## Test conventions - The same file hierarchy should be used between a project and the associated tests. ``` my_project @@ -123,7 +123,7 @@ class TestMyFunction: - Don't mistake a stub for a mock. A mock is used to assert that it has been called (see above example). A stub is used to simulate the returned value. -**Versioning strategy** +## Versioning strategy - Git tutorial: - [Basic git tutorial](http://rogerdudler.github.io/git-guide/) - [Learn git branching](https://learngitbranching.js.org/) diff --git a/docs/DOCUMENTATION.md b/docs/DOCUMENTATION.md new file mode 100644 index 00000000..c3b033da --- /dev/null +++ b/docs/DOCUMENTATION.md @@ -0,0 +1,23 @@ +# Documentation + +To update the documentation, feel free to modify / add markdown file in the `/docs` folder of the repository + +## Preview Locally + +To build locally your github pages site +```shell +$ mkdocs build +``` +To test locally your github pages site +```shell +$ mkdocs serve +``` + +## Publish on github pages + +Simply commit your modification on your branch, issue a PR and the workflow [publication_pages_gh-pages_branch.yml](https://github.com/octo-technology/VIO/tree/main/.github/workflows/publication_pages_gh-pages_branch.yml) will be triggered automatically. 
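+
+If `mkdocs` is not installed locally yet, a minimal setup could look like the following (a sketch only, assuming a working Python environment; the theme and plugins referenced in `mkdocs.yaml` may require extra packages):
+```shell
+$ pip install mkdocs
+```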
+ +Note: (Wrong behaviour) to manually push your modification directly to github pages you can execute the command: +```shell +$ mkdocs gh-deploy +``` diff --git a/docs/documentation.md b/docs/documentation.md deleted file mode 100644 index 52a969ac..00000000 --- a/docs/documentation.md +++ /dev/null @@ -1,16 +0,0 @@ -## Documentation - -to update the documentation - -To build locally your github pages site -```shell -$ mkdocs build -``` -To test locally your github pages site -```shell -$ mkdocs serve -``` -to push the github pages updates to the dedicated branch gh-deploy -```shell -$ mkdocs gh-deploy -``` diff --git a/docs/deployment.md b/docs/edge_deployment.md similarity index 61% rename from docs/deployment.md rename to docs/edge_deployment.md index 7a0e40ed..932d20c1 100644 --- a/docs/deployment.md +++ b/docs/edge_deployment.md @@ -1,110 +1,11 @@ -# Deployment +# Edge Deployment -## Cloud - Infrastructure deployment on Azure - -This section allows you to create all the Azure infrastructure for VIO: -- Storage resources (Storage Account + PostgreSQL) -- The IoT Hub -- An Azure function (`telemetry_saver`) to save Device-to-Cloud telemetry data in PostgreSQL -- An Event Grid Topic to connect IoT Hub with the `telemetry_saver` Azure function - -### Prerequisites - -Before getting started, you need to install Ansible and its dependencies for Azure and PostgreSQL. - -```shell -$ cd ./deployment/ -$ conda create -n ansible python=3 -$ conda activate ansible -$ pip install -r requirements.txt -$ ansible-galaxy collection install azure.azcollection -$ ansible-galaxy collection install community.grafana -``` - - -You'll also need : -- The [Azure CLI](https://docs.microsoft.com/fr-fr/cli/azure/install-azure-cli) -- The [Azure CLI IoT extension](https://github.com/Azure/azure-iot-cli-extension) extension -- The [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools) - -On MacOS, these can be installed as follows: - -```shell -$ brew update -$ brew install azure-cli -$ az extension add --name azure-iot -$ brew tap azure/functions -$ brew install azure-functions-core-tools@3 -``` - -Once you have installed `azure-cli`, you can login to Azure using your Accenture account: - -```shell -$ az login -``` - -Make sure you are using the Azure subscription `IX-Visual-Inspection-MDI`. You can check that with: -```shell -$ az account list --output table -``` - -If `IX-Visual-Inspection-MDI` is not the default subscription, you can switch to it with the following command: -```shell -$ az account set --subscription "IX-Visual-Inspection-MDI" -``` - -### Define the mandatory environment variables - -In order to create and configure all the Azure infrastructure, we need to define some environment variables: - -```shell -$ export REGISTRY_USERNAME= -$ export REGISTRY_PASSWORD= -$ export POSTGRES_USERNAME= -$ export POSTGRES_PASSWORD= -$ export AZURE_STORAGE_CONNECTION_STRING= -``` - -#### Registry Username -firstname.lastname (i.e nicolas.dupont) - -#### Registry Password -You can find it on gitlab, click on your profil picture (top right corner) --> preferences. -On the ```Access Tokens``` category, you can generate a token. I suggest no expiration date, and you select all the scopes. -Keep this token safe, once it's generated you cannot retrieve it on gitlab anymore. - -#### Postgres Username and Password. 
-- To Get the Postgres username, go the Azure Portal, our subscription ```IX-Visual-Inspection-MDI``` --> our resource group ```vio-rg-dev``` --> the ``` vio-function-app-dev ``` function app. -On the left side bar menu, click on ```Configuration``` and unhide the POSTGRES_USER field. You only need what's before the @. Here it's ```vioadmin``` -- To Ge the Postgres password, it's on the same page but unhide the POSTGRES_PASSWORD field. - -![postgres_username_password.png](images/postgres_username_password.png) - - -### Create Azure Infrastructure - -The following command creates all the Azure IoT infrastructure for VIO. - -```shell -$ ansible-playbook ansible/create_azure_cloud_infrastructure.yml -e 'ansible_python_interpreter=' -``` - -## Cloud - Deploy Grafana dashboard and data-sources - -To deploy Grafana [dashboard](../monitoring/dashboards) and [data-sources](../monitoring/provisioning), run the following playbook : -```shell -$ ansible-playbook -i ansible/inventory/production.ini ansible/update_grafana_dashboard.yml --ask-pass -``` - -This will copy the files on the Grafana resource deployed in Azure and relaunch the grafana service to take into account the brand uploaded files. - - -## Edge - Raspberry Setup (Raspbian installation) +## Raspberry Setup (Raspbian installation) The Raspberry can be set up thanks to this [Makefile](Makefile). First thing first, insert the SD card in your computer to mount it. **Before typing any command**, check that the SD card is effectively mounted on `/dev/disk2`, by typing: ```shell -$ diskutil list +$ diskutil list ``` Checklist before continuing: @@ -234,9 +135,9 @@ $ diskutil unmountDisk $(MOUNTING_DIR) $ diskutil eject $(MOUNTING_DIR) ``` -## Edge - Install and configure the IoT Edge Agent on RaspberryPI +## Install and configure the IoT Edge Agent on RaspberryPI -In order to be managed by Azure IoT Hub, each edge device must install an IoT Edge Agent and _connect_ to the Hub. +In order to be managed by Azure IoT Hub, each edge device must install an IoT Edge Agent and _connect_ to the Hub. We use Ansible to automate the setup of the IoT Edge Agent. diff --git a/docs/edge_interface.md b/docs/edge_interface.md index c3d729ab..b0c4a713 100644 --- a/docs/edge_interface.md +++ b/docs/edge_interface.md @@ -1,4 +1,4 @@ -# The Edge interface +# Edge Interface 1. Select a configuration ![edge_interface_config_screen](images/edge_interface_config_screen.png) diff --git a/docs/model_serving.md b/docs/edge_model_serving.md similarity index 90% rename from docs/model_serving.md rename to docs/edge_model_serving.md index c8df31a0..cb5c45f7 100644 --- a/docs/model_serving.md +++ b/docs/edge_model_serving.md @@ -1,4 +1,4 @@ -# The edge model serving +# Edge Model Serving more documentation coming soon.. diff --git a/docs/supervisor.md b/docs/edge_orchestrator.md similarity index 60% rename from docs/supervisor.md rename to docs/edge_orchestrator.md index 472860d3..ae63882e 100644 --- a/docs/supervisor.md +++ b/docs/edge_orchestrator.md @@ -1,83 +1,76 @@ -# The edge orchestrator +# Edge Orchestrator -Le supervisor orchestre les Ă©tapes suivantes dĂšs qu'il est dĂ©clenchĂ© : +The supervisor orchestrates the following steps as soon as it is triggered: -1. capture d'images -2. sauvegarde des images -3. sauvegarde des metadata -4. faire l'infĂ©rence des modĂšles sur les images -6. sauvegarde des rĂ©sultats +1. image capture +2. image backup +3. metadata backup +4. model inference on images +5. 
saving results
-## DĂ©veloppement
+
 ![vio-architecture-stack](images/supervisor-actions.png)
-Pour faciliter l'installation de l'environnement de dĂ©veloppment, un [Makefile](https://github.com/octo-technology/VIO/blob/main/supervisor/Makefile) automatise les tĂąches:
-```shell
-$ make
-❓ Use `make '
-conda_env 🐍 Create a Python conda environment
-dependencies ⏬ Install development dependencies
-tests ✅ Launch all the tests
-unit_tests ✅ Launch the unit tests
-integration_tests ✅ Launch the integration tests
-functional_tests ✅ Launch the functional tests
-pyramid âšș Compute the tests pyramid
-pyramid_and_badges 📛 Generate Gitlab badges
-```
+## Set up your development environment
+To facilitate the installation of the development environment, a [Makefile](https://github.com/octo-technology/VIO/blob/main/supervisor/Makefile) automates tasks:
-### Installation de l'interprĂ©teur Python
+    $ make
+    ❓ Use `make '
+    conda_env 🐍 Create a Python conda environment
+    dependencies ⏬ Install development dependencies
+    tests ✅ Launch all the tests
+    unit_tests ✅ Launch the unit tests
+    integration_tests ✅ Launch the integration tests
+    functional_tests ✅ Launch the functional tests
+    pyramid âšș Compute the tests pyramid
+    pyramid_and_badges 📛 Generate Gitlab badges
-Le projet utilise `conda` pour gĂ©rer les environnements virtuels Python [Guide d'installation Miniconda](https://docs.conda.io/en/latest/miniconda.html).
+**Python interpreter installation**
-#### MacOS
+The project uses `conda` to manage Python virtual environments [Miniconda installation guide](https://docs.conda.io/en/latest/miniconda.html).
-La façon la plus directe pour installer `conda` reste Homebrew :
-```shell
-$ brew update
-$ brew install --cask miniconda
-```
+**Install conda on MacOS**
-#### Initialiser l'environnement projet
+The most direct way to install `conda` is still Homebrew:
-Une fois Miniconda installĂ©, crĂ©er l'environnement virtuel Python et installer ses dĂ©pendences via le Makefile :
-```shell
-$ cd supervisor
-$ make conda_env
-```
+    brew update
+    brew install --cask miniconda
-#### Installation des dĂ©pendances projet
-```shell
-$ make dependencies
-```
+**Initialize the project environment**
-### Setuptools "editable mode"
+Once Miniconda is installed, create the Python virtual environment and install its dependencies using the Makefile:
-Pour pouvoir bĂ©nĂ©ficier du packaging Python sans ĂȘtre impactĂ© lors du dĂ©veloppement en local (ie. sans devoir reconstruire un package Ă  chaque modification), nous utilisons le mode `editable` (cf la [doc](https://pip.pypa.io/en/stable/cli/pip_install/#install-editable) officielle de pip).
+    cd supervisor
+    make conda_env
-```shell
-$ pip install -e .
-```
+**Install project dependencies**
-Lors de l'installation de l'environnement de dĂ©veloppement la commande ci-dessus va produire l'effet suivant:
-- Un fichier [supervisor.egg-link](/usr/local/Caskroom/miniconda/base/envs/supervisor/lib/python3.9/site-packages/supervisor.egg-link) a Ă©tĂ© crĂ©Ă© dans l'environnement virtuel supervisor avec le contenu suivante :
+    make dependencies
-```shell
-$ cat /usr/local/Caskroom/miniconda/base/envs/supervisor/lib/python3.9/site-packages/supervisor.egg-link
-/path/to/project/sources/vio_edge/supervisor
-```
+**Setuptools "editable mode"**
-Ainsi grĂące au `egg-link`, le module python `supervisor` est bien installĂ© comme librairie dans l'environnement virtuel, mais permet de ne pas avoir Ă  repackager rĂ©guliĂšrement aprĂšs une mise Ă  jour en local.
+To be able to benefit from Python packaging without being impacted during local development (i.e. without having to rebuild a package each time it is updated), we use the editable mode (see the official pip [doc](https://pip.pypa.io/en/stable/cli/pip_install/#install-editable)). -### Setuptools "development mode" + pip install -e . -Pour pouvoir installer la librairie et ses dĂ©pendences de dĂ©veloppement (librairies de tests): -```shell -$ pip install -e ".[dev]" -``` +During the installation of the development environment, the above command will have the following effect: + +A file supervisor.egg-link was created in the supervisor virtual environment with the following content: -### Setuptools "console_scripts" EntryPoints + cat /usr/local/Caskroom/miniconda/base/envs/supervisor/lib/python3.9/site-packages/supervisor.egg-link + /path/to/project/sources/vio_edge/supervisor -Dans le fichier [setup.py](https://github.com/octo-technology/VIO/blob/main/supervisor/setup.py) du supervisor, le bloc `entry_points` suivant est configurĂ© : +Thus, thanks to the egg-link, the python module supervisor is properly installed as a library in the virtual environment, but does not require regular repackaging after an update in local. + +** Setuptools "development mode" ** + +To be able to install the library and its development dependencies (test libraries): + + pip install -e ".[dev]" + +** Setuptools "console_scripts" EntryPoints ** + +In the [supervisor.egg-link](/usr/local/Caskroom/miniconda/base/envs/supervisor/lib/python3.9/site-packages/supervisor.egg-link) file of the supervisor, the following entry_points block is configured ```python setup( @@ -91,18 +84,18 @@ setup( ) ``` -L'outil de packaging `setuptools` permet de configurer diffĂ©rents types de scripts, notamment les `console_scripts` qui vont permettre la gĂ©nĂ©ration d'un script shell "shim", qui sera positionnĂ© sur le PATH, et se chargera d'appeler la fonction `supervisor.__main__:main` tel que configurĂ© +The `setuptools` package allows you to configure different types of scripts, including console_scripts, which will generate a "shim" shell script that will be placed on the PATH and will call the supervisor.__main__:main function as configured. -Ce script [supervisor](/usr/local/Caskroom/miniconda/base/envs/supervisor/bin/supervisor) se situe dans l'environnement virtuel crĂ©Ă© lors de l'installation du projet. +This supervisor [supervisor](/usr/local/Caskroom/miniconda/base/envs/supervisor/bin/supervisor) is located in the virtual environment created during project installation. -Lorsque l'on active l'environnement virtuel (en faisant `conda activate supervisor`), la variable d'environnement `$PATH` est configurĂ©e pour pointer vers le dossier `bin/` de l'environnement virtuel. +When the virtual environment is activated (by running `conda` activate supervisor), the $PATH environment variable is configured to point to the bin/ folder of the virtual environment. ```shell $ echo $PATH /usr/local/Caskroom/miniconda/base/envs/supervisor/bin:[...] ``` -Si on regarde Ă  l'intĂ©rieur de ce script, on remarque qu'il se charge d'importer notre module `supervisor` et d'en appeler l'entry point. +If we look inside this script, we notice that it is responsible for importing our supervisor module and calling its entry point. 
```shell #!/usr/local/Caskroom/miniconda/base/envs/supervisor/bin/python3.9 @@ -121,23 +114,24 @@ if __name__ == '__main__': sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0]) sys.exit(load_entry_point('supervisor', 'console_scripts', 'supervisor')()) ``` - -Pour plus d'information, la documentation se trouve [ici](https://python-packaging.readthedocs.io/en/latest/command-line-scripts.html). +For more information, the documentation can be found [here](https://python-packaging.readthedocs.io/en/latest/command-line-scripts.html). ## Tests -Pour exĂ©cuter tous les tests: -```shell -$ make tests -``` -Pour exĂ©cuter seulement les tests unitaires: -```shell -$ make unit_tests -``` +To run all tests: + + make tests + +To run only unit tests: + + make unit_tests + +## API Routes -## Routes API +All routes are prefixed with api/v1. For example, to retrieve the list of items locally, use this url: +[http://localhost:8000/api/v1/items](http://localhost:8000/api/v1/items) -Toutes les routes sont prĂ©fixĂ©es par `api/v1`. Par exemple pour rĂ©cupĂ©rer la liste des items en local, il faut utiliser cette url: `http://localhost:8000/api/v1/items` +You can also refer to the API swagger on the /docs url: [http://localhost:8000/docs](http://localhost:8000/docs) ## Add a new configuration diff --git a/docs/hub_deployment.md b/docs/hub_deployment.md new file mode 100644 index 00000000..89d8bfb3 --- /dev/null +++ b/docs/hub_deployment.md @@ -0,0 +1,99 @@ +# Hub Deployment + +The VIO hub modules can be deployed in any cloud, for this tutorial we decided to use Azure and its IoT solution Azure IoT Edge/Hub + +This section allows you to create all the Azure infrastructure for VIO: +- Storage resources (Storage Account + PostgreSQL) +- The IoT Hub +- An Azure function (`telemetry_saver`) to save Device-to-Cloud telemetry data in PostgreSQL +- An Event Grid Topic to connect IoT Hub with the `telemetry_saver` Azure function + +### Prerequisites + +Before getting started, you need to install Ansible and its dependencies for Azure and PostgreSQL. + +```shell +$ cd ./deployment/ +$ conda create -n ansible python=3 +$ conda activate ansible +$ pip install -r requirements.txt +$ ansible-galaxy collection install azure.azcollection +$ ansible-galaxy collection install community.grafana +``` + + +You'll also need : +- The [Azure CLI](https://docs.microsoft.com/fr-fr/cli/azure/install-azure-cli) +- The [Azure CLI IoT extension](https://github.com/Azure/azure-iot-cli-extension) extension +- The [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools) + +On MacOS, these can be installed as follows: + +```shell +$ brew update +$ brew install azure-cli +$ az extension add --name azure-iot +$ brew tap azure/functions +$ brew install azure-functions-core-tools@3 +``` + +Once you have installed `azure-cli`, you can login to Azure using your Accenture account: + +```shell +$ az login +``` + +Make sure you are using the Azure subscription `IX-Visual-Inspection-MDI`. 
You can check that with:
+```shell
+$ az account list --output table
+```
+
+If `IX-Visual-Inspection-MDI` is not the default subscription, you can switch to it with the following command:
+```shell
+$ az account set --subscription "IX-Visual-Inspection-MDI"
+```
+
+### Define the mandatory environment variables
+
+In order to create and configure all the Azure infrastructure, we need to define some environment variables:
+
+```shell
+$ export REGISTRY_USERNAME=
+$ export REGISTRY_PASSWORD=
+$ export POSTGRES_USERNAME=
+$ export POSTGRES_PASSWORD=
+$ export AZURE_STORAGE_CONNECTION_STRING=
+```
+
+#### Registry Username
+firstname.lastname (e.g. nicolas.dupont)
+
+#### Registry Password
+You can find it on gitlab, click on your profile picture (top right corner) --> preferences.
+In the ```Access Tokens``` category, you can generate a token. I suggest no expiration date and selecting all the scopes.
+Keep this token safe, once it's generated you cannot retrieve it on gitlab anymore.
+
+#### Postgres Username and Password
+- To get the Postgres username, go to the Azure Portal, our subscription ```IX-Visual-Inspection-MDI``` --> our resource group ```vio-rg-dev``` --> the ``` vio-function-app-dev ``` function app.
+On the left sidebar menu, click on ```Configuration``` and unhide the POSTGRES_USER field. You only need what's before the @. Here it's ```vioadmin```
+- To get the Postgres password, it's on the same page but unhide the POSTGRES_PASSWORD field.
+
+![postgres_username_password.png](images/postgres_username_password.png)
+
+
+### Create Azure Infrastructure
+
+The following command creates all the Azure IoT infrastructure for VIO.
+
+```shell
+$ ansible-playbook ansible/create_azure_cloud_infrastructure.yml -e 'ansible_python_interpreter='
+```
+
+### Deploy hub monitoring (grafana)
+
+To deploy Grafana [dashboard](../monitoring/dashboards) and [data-sources](../monitoring/provisioning), run the following playbook:
+```shell
+$ ansible-playbook -i ansible/inventory/production.ini ansible/update_grafana_dashboard.yml --ask-pass
+```
+
+This will copy the files on the Grafana resource deployed in Azure and relaunch the grafana service to take into account the newly uploaded files.
diff --git a/docs/monitoring.md b/docs/hub_monitoring.md
similarity index 99%
rename from docs/monitoring.md
rename to docs/hub_monitoring.md
index e6abf811..88352fe9 100644
--- a/docs/monitoring.md
+++ b/docs/hub_monitoring.md
@@ -1,4 +1,4 @@
-# The hub monitoring
+# Hub Monitoring

 The monitoring is here to help us monitor our IoTHub Devices and Modules via a Grafana dashboard.
diff --git a/docs/images/stack-up-with-docker.png b/docs/images/stack-up-with-docker.png
index 911c55c1..a94eb234 100644
Binary files a/docs/images/stack-up-with-docker.png and b/docs/images/stack-up-with-docker.png differ
diff --git a/docs/index.md b/docs/index.md
index f4d2ebc6..1cf7a5e2 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,18 +1,22 @@
 # Getting Started

-Visual Inspection Orchestrator is a modular framework made to ease the deployment of VI usecases.
+Visual Inspection Orchestrator is a modular open source framework made to ease the deployment of VI usecases, initiated by Octo Technology.

-Usecase example: Quality check of a product manufactured on an assembly line.
+*Usecase example: Quality check of a product manufactured on an assembly line.*
+
+The VIO modules are split between:
-## Features
+
+**Edge modules**: The VIO edge modules are deployed close to the object to inspect
+
-- [The edge orchestrator](supervisor.md)
+- [The edge orchestrator](edge_orchestrator.md)
 - [The edge interface](edge_interface.md)
-- [The edge model serving](model_serving.md)
-- [The hub monitoring](monitoring.md)
-- [The deployment tools](deployment.md)
+- [The edge model serving](edge_model_serving.md)
+- [The edge deployment playbook](edge_deployment.md)
+
+**Hub modules**: The VIO hub modules are deployed in the cloud to collect data and orchestrate the edge fleet
+
+- [The hub monitoring](hub_monitoring.md)
+- [The hub deployment playbook](hub_deployment.md)

 ## Install the framework

@@ -20,22 +24,25 @@ To launch the complete stack, you'll need a minima docker install on your machin
 `git clone git@github.com:octo-technology/VIO.git`

-Note: The VIO docker images will be soon available in a public registry, stay tunned. For now you can download the repository and build the image locally.
+Note: The VIO docker images are available [here](https://github.com/orgs/octo-technology/packages?repo_name=VIO)

 ## Run the stack

 To launch the stack you can use the [Makefile](https://github.com/octo-technology/VIO/blob/main/Makefile) on the root of the repository which define the different target based on the [docker-compose.yml](https://github.com/octo-technology/VIO/blob/main/docker-compose.yml):

-- run all services (supervisor, model-serving, Mongo DB, UI) : `make services-up`
-- run the core (supervisor) containerized : `make supervisor`
-- run the model serving containerized: `make model_serving`
-- run the edge interface containerized : `make ui`
-- stop and delete all running services : `make services-down`
+- run all edge services (orchestrator, model-serving, interface, db) with local hub monitoring (grafana): `make vio-edge-up`
+- stop and delete all running services: `make vio-edge-down`
+
+In case you want to run a specific module, each module has its own make command:
+
+- run the edge_orchestrator containerized: `make edge_orchestrator`
+- run the edge model serving containerized: `make edge_model_serving`
+- run the edge interface containerized: `make edge_interface`

-Each of the above target correspond to a command [docker-compose.yml](https://github.com/octo-technology/VIO/blob/main/docker-compose.yml). For example, the target `supervisor` correspond to :
+Each of the above targets corresponds to a command in the [docker-compose.yml](https://github.com/octo-technology/VIO/blob/main/docker-compose.yml). For example, the target `edge_orchestrator` corresponds to:
 ```shell
-$ docker-compose up -d --build supervisor
+$ docker-compose up -d --build edge_orchestrator
 ```

 To check all services are up and running you can run the command `docker ps`, you should see something like below:

@@ -44,17 +51,17 @@ To check all services are up and running you can run the command `docker ps`, yo
 Once all services are up and running you can access:

-- the swagger of the core API (OrchestratoAPI): [http://localhost:8000/docs](http://localhost:8000/docs)
-- the swagger of the model serving: [http://localhost:8501/docs](http://localhost:8501/docs)
-- the monitoring grafana: [http://localhost:4000/login](http://localhost:4000/login)
+- the swagger of the edge orchestrator API (OrchestratoAPI): [http://localhost:8000/docs](http://localhost:8000/docs)
+- the swagger of the edge model serving: [http://localhost:8501/docs](http://localhost:8501/docs)
+- the hub monitoring: [http://localhost:4000/login](http://localhost:4000/login)
 - the edge interface: [http://localhost:8080](http://localhost:8080)

-From the edge interface you can load a configuration and run the trigger button that will trigger the Core API and launch the following actions:
+From the [edge interface](edge_interface.md) you can load a configuration and press the trigger button, which will call the Orchestrator API and launch the following actions:

 ![vio-architecture-stack](images/supervisor-actions.png)

 ## Implementation example

-Here you can find an implementation of VIO deployed on Azure managing a fleet of Raspberrys:
+Here you can find an implementation of VIO deployed on Azure (vio-hub) managing a fleet of Raspberry Pis (vio-edge):

 ![vio-architecture-stack](images/vio_azure_stack.png)
diff --git a/docs/overview.md b/docs/overview.md
index 74bd019f..cbed26f8 100644
--- a/docs/overview.md
+++ b/docs/overview.md
@@ -19,12 +19,15 @@ VIO core has been built following the hexagonal architecture patterns, therefore
 ### Micro-services approach
 Each sub folders below are indeed a module, an application, an independant micro service. Anyone of them is therefore functional by itself.
-Les sous-dossiers du dossier courant, Ă  savoir :
-- [The core](supervisor.md)
-- [The deployment tools](deployment.md)
-- [The fleet monitoring](monitoring.md)
+### Edge modules
+- [The edge orchestrator](edge_orchestrator.md)
 - [The edge interface](edge_interface.md)
-- [The model serving](model_serving.md)
+- [The edge model serving](edge_model_serving.md)
+- [The edge deployment playbook](edge_deployment.md)
+
+### Hub modules
+- [The hub monitoring](hub_monitoring.md)
+- [The hub deployment playbook](hub_deployment.md)

 All of those modules have been packages inside a dedicated docker images to facilitate their deployment.
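+
+As an illustration, a single module can be built and started on its own through its make target (a sketch based on the targets documented in the Getting Started page; the exact service names are defined in docker-compose.yml):
+
+```shell
+# build and start only the edge orchestrator service, then check that its container is running
+make edge_orchestrator
+docker ps
+```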
diff --git a/mkdocs.yaml b/mkdocs.yaml index 1c8bab7e..41e1be2a 100644 --- a/mkdocs.yaml +++ b/mkdocs.yaml @@ -6,15 +6,16 @@ theme: nav: - Getting Started: index.md - Overview: overview.md - - Edge orchestrator: supervisor.md - - Edge model serving: model_serving.md + - Edge orchestrator: edge_orchestrator.md + - Edge model serving: edge_model_serving.md - Edge interface: edge_interface.md - - Hub monitoring: monitoring.md - - Deployment: deployment.md + - Edge deployment: edge_deployment.md + - Hub monitoring: hub_monitoring.md + - Hub deployment: hub_deployment.md - Contribute: - - Contribute: CONTRIBUTING.md + - Contributing: CONTRIBUTING.md - Organization: ORGANIZATION.md - CI-CD: CICD.md - - Documentation: documentation.md + - Documentation: DOCUMENTATION.md - License: LICENSE.md - Authors: AUTHORS.md
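
The renamed pages and the reorganised nav above can be checked locally with the commands documented in docs/DOCUMENTATION.md (assuming mkdocs is installed):

```shell
# rebuild the site and serve it locally to verify that every nav entry resolves to an existing page
$ mkdocs build
$ mkdocs serve
```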