diff --git a/README.md b/README.md index 60290b214..ef40f8f1b 100644 --- a/README.md +++ b/README.md @@ -24,6 +24,7 @@ The AAW includes tools that allow data science users to open almost any file. Th - sqlite - many others... just ask :-) +### How much does the AAW cost? #### CPU Only diff --git a/docs/en/1-Experiments/Kubeflow.md b/docs/en/1-Experiments/Kubeflow.md index ad75dd667..39c13f4e7 100644 --- a/docs/en/1-Experiments/Kubeflow.md +++ b/docs/en/1-Experiments/Kubeflow.md @@ -52,9 +52,11 @@ for your team. ## Image -You will need to choose an image. There are JupyterLab, RStudio, and Ubuntu remote -desktop images available. Select the drop down menu to select additional options -within these (for instance, CPU, PyTorch, and TensorFlow images for JupyterLab). +You will need to choose an image. There are JupyterLab, RStudio, Ubuntu remote +desktop, and SAS images available. The SAS image is only available for StatCan +employees (due to license limitations); the others are available for everyone. +Select the drop-down menu to select additional options within these (for +instance, CPU, PyTorch, and TensorFlow images for JupyterLab). Check the name of the images and choose one that matches what you want to do. Don't know which one to choose? Check out your options [here](./Selecting-an-Image.md). @@ -63,33 +65,43 @@ which one to choose? Check out your options [here](./Selecting-an-Image.md). ## CPU and Memory -- At the time of writing (December 23, 2021) there are two types of computers in - the cluster - - - **CPU:** `D16s v3` (16 CPU cores, 64 GiB memory; for user use 15 CPU cores - and 48 GiB memory are available; 1 CPU core and 16 GiB memory reserved for - system use). - - **GPU:** `NC6s_v3` (6 CPU cores, 112 GiB memory, 1 GPU; for user use 96 GiB - memory are available; 16 GiB memory reserved for system use). 
The available - GPU is the NVIDIA Tesla V100 GPU with specs - [here](https://images.nvidia.com/content/technologies/volta/pdf/volta-v100-datasheet-update-us-1165301-r5.pdf). - - When creating a notebook server, the system will limit you to the maximum - specifications above. For CPU notebook servers, you can specify the exact - amount of CPU and memory that you require. This allows you to meet your - compute needs while minimising cost. For a GPU notebook server, you will - always get the full server (6 CPU cores, 96 GiB accessible memory, and 1 GPU). - See below section on GPUs for information on how to select a GPU server. - - In the future there may be larger machines available, so you may have looser - restrictions. +At the time of writing (December 23, 2021) there are two types of computers in +the cluster + + - **CPU:** `D16s v3` (16 CPU cores, 64 GiB memory; for user use 15 CPU cores + and 48 GiB memory are available; 1 CPU core and 16 GiB memory reserved for + system use). + - **GPU:** `NC6s_v3` (6 CPU cores, 112 GiB memory, 1 GPU; for user use 96 GiB + memory are available; 16 GiB memory reserved for system use). The available + GPU is the NVIDIA Tesla V100 GPU with specs + [here](https://images.nvidia.com/content/technologies/volta/pdf/volta-v100-datasheet-update-us-1165301-r5.pdf). + +When creating a notebook server, the system will limit you to the maximum +specifications above. For CPU notebook servers, you can specify the exact +amount of CPU and memory that you require. This allows you to meet your +compute needs while minimising cost. For a GPU notebook server, you will +always get the full server (6 CPU cores, 96 GiB accessible memory, and 1 GPU). +See below section on GPUs for information on how to select a GPU server. + +In the advanced options, you can select a higher limit than the number of CPU cores and +RAM requested. 
The amount requested is the amount guaranteed to be available for your +notebook server, and you will always pay for at least this much. If the limit is higher +than the amount requested, and additional RAM and CPU cores are available on that shared +server in the cluster, your notebook server can use them as needed. One use case for this +is jobs that usually need only one CPU core but can benefit from multithreading to speed +up certain operations. By requesting one CPU core but a higher limit, you can pay much +less for the notebook server while allowing it to use spare unused CPU cores as needed +to speed up computations. + +![Select CPU and RAM](../images/cpu-ram-select.png) ## GPUs If you want a GPU server, select `1` as the number of GPUs and `NVIDIA` as the GPU vendor (the create button will be greyed out until the GPU vendor is selected if -you have a GPU specified). Multi-GPU servers are not currently supported on the -AAW system. +you have a GPU specified). Multi-GPU servers are currently supported on the AAW +system only on a special on-request basis; please contact the AAW maintainers if +you would like a multi-GPU server. ![GPU Configuration](../images/kubeflow_gpu_selection.jpg) @@ -110,10 +122,6 @@ are various configuration options available: - You can specify the size of the workspace volume, from 4 GiB to 32 GiB. -- You can choose the option to not use persistent storage for home, in which case the - home folder will be deleted as soon as the notebook server is closed. Otherwise the - home folder will remain and can be used again for a new notebook server in the future. - ![Create a Workspace Volume](../images/workspace-volume.PNG) @@ -124,17 +132,23 @@ are various configuration options available: ## Data Volumes You can also create data volumes that can be used to store additional data. Multiple -data volumes can be created. Click the add volume button to create a new volume and specify -its configuration. 
There are the following configuration parameters as for data volumes: - -- **Type**: Create a new volume or use an existing volume. +data volumes can be created. Click the add new volume button to create a new volume and +specify its configuration. Click the attach existing volume button to mount an existing +data volume to the notebook server. There are the following configuration parameters for +data volumes: - **Name**: Name of the volume. - **Size in GiB**: From 4 GiB to 512 GiB. -- **Mount Point**: Path where the data volume can be accessed on the notebook server, by - default `/home/jovyan/`. +- **Mount path**: Path where the data volume can be accessed on the notebook server, by + default `/home/jovyan/vol-1`, `/home/jovyan/vol-2`, etc. (incrementing counter per data + volume mounted). + +When mounting an existing data volume, the name option becomes a drop-down list of the +existing data volumes. Only a volume not currently mounted to an existing notebook server +can be used. The mount path option remains user-configurable with the same defaults as +creating a new volume. The garbage can icon on the right can be used to delete an existing or accidentally created data volume. @@ -152,14 +166,6 @@ There are currently three checkbox options available here: access to any Protected B resources. Protected B notebook servers run with many security restrictions and have access to separate MinIO instances specifically designed for Protected B data. -- **Allow access to Kubeflow Pipelines**: This will allow the notebook server to - create and manage Kubeflow pipelines. Enable this if you want to use Kubeflow - pipelines. - -## Affinity / Tolerations - - -!!! note "This section needs to be filled in." 
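The request-versus-limit behaviour described in the CPU and Memory section above can be tried out directly. Below is a minimal sketch using only the Python standard library; the blobs and worker count are made up for illustration. Hashing releases Python's global interpreter lock on large buffers, so threads like these can genuinely run in parallel on spare cores up to your configured CPU limit:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# Hypothetical workload: hash several large in-memory blobs. With one
# requested CPU core but a higher limit, spare cores on the shared node
# can pick up these threads when they are free.
blobs = [b"a" * 1_000_000, b"b" * 1_000_000, b"c" * 1_000_000]

def digest(blob):
    # hashlib releases the GIL while hashing large buffers,
    # so several digests can be computed at the same time.
    return hashlib.sha256(blob).hexdigest()

with ThreadPoolExecutor(max_workers=4) as pool:
    digests = list(pool.map(digest, blobs))

print(len(digests))  # 3
```

The same code runs unchanged whether or not extra cores are available; the limit only changes how much of it can happen in parallel.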
## Miscellaneous Settings diff --git a/docs/en/3-Pipelines/Kubeflow-Pipelines.md b/docs/en/3-Pipelines/Kubeflow-Pipelines.md index 8e222de35..0f95ca243 100644 --- a/docs/en/3-Pipelines/Kubeflow-Pipelines.md +++ b/docs/en/3-Pipelines/Kubeflow-Pipelines.md @@ -1,5 +1,10 @@ # Overview + +!!! warning "Kubeflow pipelines are in the process of being removed from AAW." + No new development should use Kubeflow pipelines. If you have questions + about this removal, please speak with the AAW maintainers. + [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) are in the process of being removed from AAW. They will be replaced by [Argo Workflows](https://argoproj.github.io/argo-workflows/), which will be implemented on AAW soon. diff --git a/docs/en/4-Collaboration/Geospatial-Analytical-Environment.md b/docs/en/4-Collaboration/Geospatial-Analytical-Environment.md new file mode 100644 index 000000000..208447cfb --- /dev/null +++ b/docs/en/4-Collaboration/Geospatial-Analytical-Environment.md @@ -0,0 +1,190 @@ +# Geospatial Analytical Environment (GAE) - Cross Platform Access + + +??? danger "Unprotected data only; SSI coming soon:" + At this time, our Geospatial server can only host and provide access to non-sensitive statistical information. + +## Getting Started + + +??? success "Prerequisites" + 1. An onboarded project with access to DAS GAE ArcGIS Portal + 2. An ArcGIS Portal Client Id (API Key) + +The ArcGIS Enterprise Portal can be accessed in either the AAW or CAE using the API, from any service which leverages the Python programming language. + +For example, in the AAW this includes [Jupyter Notebooks](https://statcan.github.io/daaas/en/1-Experiments/Jupyter/), and in the CAE it includes [Databricks](https://statcan.github.io/cae-eac/en/DataBricks/), DataFactory, etc. 
+ +[The DAS GAE ArcGIS Enterprise Portal can be accessed directly here](https://geoanalytics.cloud.statcan.ca/portal) + +[For help with self-registering as a DAS Geospatial Portal user](https://statcan.github.io/daaas-dads-geo/english/portal/) + +
+ +## Using the ArcGIS API for Python + +### Connecting to ArcGIS Enterprise Portal using ArcGIS API + +1. Install packages: + + ```shell + conda install -c esri arcgis + ``` + + or using Artifactory + + ```shell + conda install -c https://jfrog.aaw.cloud.statcan.ca/artifactory/api/conda/esri-remote arcgis + ``` + +2. Import the necessary libraries that you will need in the Notebook. + ```python + from arcgis.gis import GIS + from arcgis.gis import Item + ``` + +3. Access the Portal + Your project group will be provided with a Client ID upon onboarding. Paste the Client ID in between the quotations ```client_id='######'```. + + ```python + gis = GIS("https://geoanalytics.cloud.statcan.ca/portal", client_id=' ') + print("Successfully logged in as: " + gis.properties.user.username) + ``` + +4. - The output will redirect you to a login Portal. + - Use the StatCan Azure Login option, and your Cloud ID + - After successful login, you will receive a code to sign in using SAML. + - Paste this code into the output. + + + ![OAuth2 Approval](../images/OAuth2Key.png)
+ +### Display user information +Using the 'me' function, we can display various information about the user logged in. +```python +me = gis.users.me +username = me.username +description = me.description +display(me) +``` + +
+ +### Search for Content +Search for the content you have hosted on the DAaaS Geo Portal. Using the 'me' function we can search for all of the hosted content on the account. There are multiple ways to search for content. Two different methods are outlined below. + +**Search all of your hosted items in the DAaaS Geo Portal.** +```python +my_content = me.items() +my_content +``` +**Search for specific content you own in the DAaaS Geo Portal.** + +This is similar to the example above; however, if you know the title of the layer you want to use, you can save it in a variable. +```python +my_items = me.items() +for items in my_items: + print(items.title, " | ", items.type) + if items.title == "Flood in Sorel-Tracy": + flood_item = items + + else: + continue +print(flood_item) +``` + +**Search all content you have access to, not just your own.** + +```python +flood_item = gis.content.search("tags: flood", item_type ="Feature Service") +flood_item +```
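One caveat about the title-matching loop above: if no hosted item has the expected title, `flood_item` is never assigned and later code fails with a `NameError`. A slightly safer version of the same pattern, shown here with hypothetical stand-in objects since it otherwise needs a live Portal connection:

```python
# Stand-ins for the items returned by me.items(); with a live
# connection you would iterate over the real list instead.
class StubItem:
    def __init__(self, title, type_):
        self.title = title
        self.type = type_

my_items = [StubItem("Roads", "Feature Service"),
            StubItem("Flood in Sorel-Tracy", "Feature Service")]

flood_item = None
for item in my_items:
    if item.title == "Flood in Sorel-Tracy":
        flood_item = item
        break  # stop at the first match

if flood_item is None:
    raise LookupError("No item titled 'Flood in Sorel-Tracy' was found")
print(flood_item.title)
```

The `None` guard makes the failure mode an explicit error message rather than a confusing `NameError` further down the notebook.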
+ +### Get Content +We need to get the item from the DAaaS Geo Portal in order to use it in the Jupyter Notebook. This is done by providing the unique identification number of the item you want to use. Three examples are outlined below, all accessing the identical layer. +```python +item1 = gis.content.get(my_content[5].id) #from searching your content above +display(item1) + +item2 = gis.content.get(flood_item.id) #from example above -searching for specific content +display(item2) + +item3 = gis.content.get('edebfe03764b497f90cda5f0bfe727e2') #the actual content id number +display(item3) +``` + +
+ +### Perform Analysis +Once the layers are brought into the Jupyter notebook, we are able to perform the same types of analysis you would expect to find in GIS software such as ArcGIS. There are many modules, each containing sub-modules that can perform multiple types of analyses. +
+ +Using the arcgis.features module, import the use_proximity submodule ```from arcgis.features import use_proximity```. This submodule lets us use '.create_buffers', which creates areas of equal distance around features. Here, we specify the layer we want to use, distance, units, and output name (you may also specify other characteristics such as field, ring type, end type, and others). Because we specify an output name, running the buffer command automatically uploads a new layer into the DAaaS GEO Portal containing the new feature you just created. +
+ +```python +buffer_lyr = use_proximity.create_buffers(item1, distances=[1], + units = "Kilometers", + output_name='item1_buffer') + +display(buffer_lyr) +``` + +Some users prefer to work with Open-Source packages. Translating from ArcGIS to Spatial Dataframes is simple. +```python +import pandas as pd + +# create a Spatially Enabled DataFrame object from a layer obtained above +sdf = pd.DataFrame.spatial.from_layer(feature_layer) +```
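Because a Spatially Enabled DataFrame is a regular pandas DataFrame underneath, the usual pandas operations apply to it. A small sketch with a plain DataFrame standing in for `sdf` (the column names here are made up):

```python
import pandas as pd

# Plain DataFrame standing in for a Spatially Enabled DataFrame;
# non-spatial operations work the same way on both.
sdf = pd.DataFrame({"name": ["A", "B", "C"],
                    "flood_depth_m": [0.2, 1.5, 0.9]})

# Filter and summarise with ordinary pandas idioms.
deep = sdf[sdf["flood_depth_m"] > 0.5]
print(len(deep), sdf["flood_depth_m"].max())  # 2 1.5
```

This is often the quickest way to sanity-check a layer before doing any spatial work on it.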
+ +### Update Items +By getting the item as in the example above, we can use the '.update' function to update an existing item within the DAaaS GEO Portal. We can update item properties, data, thumbnails, and metadata. +```python +item1_buffer = gis.content.get('c60c7e57bdb846dnbd7c8226c80414d2') +item1_buffer.update(item_properties={'title': 'Enter Title', + 'tags': 'tag1, tag2, tag3, tag4', + 'description': 'Enter description of item'}) +```
+ +### Visualize Your Data on an Interactive Map + +**Example: Matplotlib Library** +In the code below, we create an ax object, which is a map-style plot. We then plot our 'Population Change' column on the axes. +```python +import matplotlib.pyplot as plt +ax = sdf.boundary.plot(figsize=(10, 5)) +shape.plot(ax=ax, column='Population Change', legend=True) +plt.show() +``` + +**Example: ipyleaflet Library** +In this example we will use the library 'ipyleaflet' to create an interactive map. This map will be centered around Toronto, ON. The data being used will be outlined below. +Begin by running ```conda install -c conda-forge ipyleaflet``` to install the ipyleaflet library in the Python environment. +
+Import the necessary libraries. +```python +import ipyleaflet +from ipyleaflet import * +``` +Now that we have imported the ipyleaflet module, we can create a simple map by specifying the latitude and longitude of the location we want, zoom level, and basemap [(more basemaps)](https://ipyleaflet.readthedocs.io/en/latest/map_and_basemaps/basemaps.html). Extra controls have been added such as layers and scale. +```python +toronto_map = Map(center=[43.69, -79.35], zoom=11, basemap=basemaps.Esri.WorldStreetMap) + +toronto_map.add_control(LayersControl(position='topright')) +toronto_map.add_control(ScaleControl(position='bottomleft')) +toronto_map +``` +
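Rather than hard-coding the centre point as above, you can derive it from your own data. A small sketch that averages a few example coordinates (the points listed are made up):

```python
# Made-up example points as (lat, lon) pairs around Toronto.
points = [(43.70, -79.42), (43.65, -79.38), (43.72, -79.25)]

# Average the latitudes and longitudes to get a simple centre
# that can be passed to Map(center=..., zoom=...).
center = [sum(lat for lat, _ in points) / len(points),
          sum(lon for _, lon in points) / len(points)]
print(center)
```

For points spread over a small area like a single city this simple average is a reasonable centre; for widely spread points you would want a proper bounding-box or projected centroid instead.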
+ +## Learn More about the ArcGIS API for Python +[Full documentation for the ArcGIS API can be located here](https://developers.arcgis.com/python/) + +## Learn More about DAS Geospatial Analytical Environment (GAE) and Services +[GAE Help Guide](https://statcan.github.io/daaas-dads-geo/) \ No newline at end of file diff --git a/docs/en/5-Storage/MinIO.md b/docs/en/5-Storage/MinIO.md index 0cf328abd..213761ad4 100644 --- a/docs/en/5-Storage/MinIO.md +++ b/docs/en/5-Storage/MinIO.md @@ -16,6 +16,7 @@ S3 storage). Buckets are good at three things: ## MinIO Mounted Folders on a Notebook Server + !!! warning "MinIO mounts are not currently working on Protected B servers." Your MinIO storage are mounted as directories if you select the option @@ -104,6 +105,7 @@ This lets you browse, upload/download, delete, or share files. ## Browse Datasets + !!! warning "The link below is not currently working." Browse some [datasets](https://datasets.covid.cloud.statcan.ca) here. These data @@ -218,6 +220,7 @@ send to a collaborator! ## Get MinIO Credentials + !!! warning "The methods below have not been tested recently, since certain MinIO changes. These may require adjustment." diff --git a/docs/en/6-Gitlab/Gitlab.md b/docs/en/6-Gitlab/Gitlab.md new file mode 100644 index 000000000..63924edb2 --- /dev/null +++ b/docs/en/6-Gitlab/Gitlab.md @@ -0,0 +1,35 @@ +![image](https://user-images.githubusercontent.com/23174198/217059855-1ea17aba-d722-467e-96f4-0a37eda3f035.png) + +Thankfully, using the cloud main GitLab on the AAW is just like using Git anywhere else. + +### Step 1: Locate the Git repo you want to clone and copy the clone with HTTPS option +If your repository is private, you will need to also do Step 4 (Creating a Personal Access Token) for this to go through. 
For me this was a test repo +![image](https://user-images.githubusercontent.com/23174198/217060353-ba229ced-b5c1-4eae-8878-9608835cc65f.png) + +### Step 2: Paste the copied link into one of your workspace servers +![image](https://user-images.githubusercontent.com/23174198/217060697-535df6c1-d9bb-4bc3-a42b-9f085a5386d5.png) + +### Step 3: Success! +As seen in the above screenshot, I have cloned the repo! + +### Step 4: Create a Personal Access Token for pushing (also used if pulling from a private repository) +If you try to `git push ....` you will encounter an error eventually leading you to the [gitlab help documentation](https://gitlab.k8s.cloud.statcan.ca/help/user/profile/account/two_factor_authentication.md#error-http-basic-access-denied-the-provided-password-or-token-) + +You will need to make a Personal Access Token for this. To do this, go into GitLab, click your profile icon, then hit `Preferences` and then `Access Tokens` +![image](https://user-images.githubusercontent.com/23174198/217061060-122dded8-dc80-46ce-a907-a85913cf5dd7.png) +Follow the prompts, entering the name and the token expiration date, and granting the token permissions (I granted `write_repository`) + +### Step 5: Personalize `Git` to be you +Run `git config user.email ....` and `git config user.name ...` to match your GitLab identity. 
+ +### Step 6: Supply the Generated Token when asked for your password +The token will be copyable at the top once you hit `Create personal access token` at the bottom +![image](https://user-images.githubusercontent.com/23174198/217062846-03a715f1-ded5-4d80-ad4b-c647ae5e30fd.png) + +Once you have prepared everything, it's time to push! +![image](https://user-images.githubusercontent.com/23174198/217063198-c1bd6c3a-ebc5-444d-98ba-24ef32faa20e.png) + +### Step 7: See the results of your hard work in gitlab +![image](https://user-images.githubusercontent.com/23174198/217063990-efaa8e81-a0eb-4b6d-842e-2ca3112bb4f7.png) + diff --git a/docs/en/images/OAuth2Key.png b/docs/en/images/OAuth2Key.png new file mode 100644 index 000000000..8336c826f Binary files /dev/null and b/docs/en/images/OAuth2Key.png differ diff --git a/docs/en/images/cpu-ram-select.png b/docs/en/images/cpu-ram-select.png new file mode 100644 index 000000000..9ceaa1be0 Binary files /dev/null and b/docs/en/images/cpu-ram-select.png differ diff --git a/docs/en/images/jupyterlab-dataflow.jpg b/docs/en/images/jupyterlab-dataflow.jpg new file mode 100644 index 000000000..f14258051 Binary files /dev/null and b/docs/en/images/jupyterlab-dataflow.jpg differ diff --git a/docs/en/images/kubeflow_volumes.png b/docs/en/images/kubeflow_volumes.png index 88d13696c..a609f07ee 100644 Binary files a/docs/en/images/kubeflow_volumes.png and b/docs/en/images/kubeflow_volumes.png differ diff --git a/docs/en/images/select-image-screenshot.PNG b/docs/en/images/select-image-screenshot.PNG index 24d50542c..1566d2453 100644 Binary files a/docs/en/images/select-image-screenshot.PNG and b/docs/en/images/select-image-screenshot.PNG differ diff --git a/docs/en/images/workspace-volume.PNG b/docs/en/images/workspace-volume.PNG index 35a178a06..42b153d98 100644 Binary files a/docs/en/images/workspace-volume.PNG and b/docs/en/images/workspace-volume.PNG differ diff --git "a/docs/fr/4-Collaboration/Aper\303\247u.md" 
"b/docs/fr/4-Collaboration/Aper\303\247u.md" index 4df480f4d..8eff89de9 100644 --- "a/docs/fr/4-Collaboration/Aper\303\247u.md" +++ "b/docs/fr/4-Collaboration/Aper\303\247u.md" @@ -109,5 +109,5 @@ possédez déjà via le Menu **Gérer les contributeurs** dans Kubeflow. ![Contributors Menu](../images/kubeflow_contributors.png) - info "Maintenant, vous et vos collègues pouvez partager l'accès à un serveur!" +!!! info "Maintenant, vous et vos collègues pouvez partager l'accès à un serveur!" Essayer le! diff --git a/docs/fr/4-Collaboration/Environnement-Analyse-Geospatiale.md b/docs/fr/4-Collaboration/Environnement-Analyse-Geospatiale.md new file mode 100644 index 000000000..4c90edb9c --- /dev/null +++ b/docs/fr/4-Collaboration/Environnement-Analyse-Geospatiale.md @@ -0,0 +1,189 @@ +# Environnement d'Analyse Géospatiale (EAG) - Accès multiplateforme + + +??? danger "Données non protégées uniquement ; SSI bientôt disponible :" + À l'heure actuelle, notre serveur géospatial ne peut héberger et donner accès qu'à des informations statistiques non sensibles. + +## Démarrage + + +??? success "Conditions préalables" + 1. Un projet intégré avec accès au portail DAS EAG ArcGIS + 2. Un identifiant client ArcGIS Portal (clé API) + +Le portail ArcGIS Enterprise est accessible dans AAW ou CAE à l'aide de l'API, à partir de n'importe quel service qui exploite le langage de programmation Python. + +Par exemple, dans AAW et l'utilisation de [Jupyter Notebooks](https://statcan.github.io/daaas/en/1-Experiments/Jupyter/) dans l'espace, ou pour CAE l'utilisation de [Databricks](https ://statcan.github.io/cae-eac/en/DataBricks/), DataFactory, etc. + +[Le portail DAS GAE ArcGIS Enterprise est accessible directement ici](https://geoanalytics.cloud.statcan.ca/portal) + +[Pour obtenir de l'aide sur l'auto-inscription en tant qu'utilisateur du portail géospatial DAS](https://statcan.github.io/daaas-dads-geo/english/portal/) + +
+ +## Utilisation de l'API ArcGIS pour Python + +### Connexion à ArcGIS Enterprise Portal à l'aide de l'API ArcGIS + +1. Installez les packages : + + ```shell + conda install -c esri arcgis + ``` + + ou utilisez Artifactory + + ```shell + conda install -c https://jfrog.aaw.cloud.statcan.ca/artifactory/api/conda/esri-remote arcgis + ``` + +2. Importez les librairies nécessaires dont vous aurez besoin dans le Notebook. + ```python + from arcgis.gis import GIS + from arcgis.gis import Item + ``` + +3. Accéder au portail + Votre groupe de projet recevra un identifiant client lors de l'intégration. Collez l'ID client entre les guillemets ```client_id='######'```. + + ```python + gis = GIS("https://geoanalytics.cloud.statcan.ca/portal", client_id=' ') + print("Successfully logged in as: " + gis.properties.user.username) + ``` + +4. - La sortie vous redirigera vers un portail de connexion. + - Utilisez l'option de connexion Azure de StatCan et votre identifiant Cloud + - Après une connexion réussie, vous recevrez un code pour vous connecter en utilisant SAML. + - Collez ce code dans la sortie. + + ![Approbation OAuth2](../images/OAuth2Key.png)
+ +### Afficher les informations utilisateur +En utilisant la fonction "me", nous pouvons afficher diverses informations sur l'utilisateur connecté. +```python +me = gis.users.me +username = me.username +description = me.description +display(me) +``` + +
+ +### Rechercher du contenu +Recherchez le contenu que vous avez hébergé sur le portail géo DAaaS. En utilisant la fonction "me", nous pouvons rechercher tout le contenu hébergé sur le compte. Il existe plusieurs façons de rechercher du contenu. Deux méthodes différentes sont décrites ci-dessous. + +**Recherchez tous vos éléments hébergés dans le portail géographique DAaaS.** +```python +my_content = me.items() +my_content +``` +**Recherchez du contenu spécifique que vous possédez dans le portail géographique DAaaS.** + +Ceci est similaire à l'exemple ci-dessus, mais si vous connaissez le titre de la couche que vous souhaitez utiliser, vous pouvez l'enregistrer dans une variable. +```python +my_items = me.items() +for items in my_items: + print(items.title, " | ", items.type) + if items.title == "Flood in Sorel-Tracy": + flood_item = items + + else: + continue +print(flood_item) +``` + +**Recherchez tout le contenu auquel vous avez accès, pas seulement le vôtre.** + +```python +flood_item = gis.content.search("tags: flood", item_type ="Feature Service") +flood_item +```
+ +### Obtenir du contenu +Nous devons obtenir l'élément du portail géographique DAaaS afin de l'utiliser dans le bloc-notes Jupyter. Cela se fait en fournissant le numéro d'identification unique de l'article que vous souhaitez utiliser. Trois exemples sont décrits ci-dessous, tous accédant à la même couche. +```python +item1 = gis.content.get(my_content[5].id) #from searching your content above +display(item1) + +item2 = gis.content.get(flood_item.id) #from example above -searching for specific content +display(item2) + +item3 = gis.content.get('edebfe03764b497f90cda5f0bfe727e2') #the actual content id number +display(item3) +``` + +
+ +### Effectuer une analyse +Une fois les couches importées dans le bloc-notes Jupyter, nous sommes en mesure d'effectuer des types d'analyse similaires à ceux que vous vous attendriez à trouver dans un logiciel SIG tel qu'ArcGIS. Il existe de nombreux modules contenant de nombreux sous-modules qui peuvent effectuer plusieurs types d'analyses. +
+ +À l'aide du module arcgis.features, importez le sous-module use_proximity ```from arcgis.features import use_proximity```. Ce sous-module nous permet d'utiliser '.create_buffers', qui crée des zones à égale distance des entités. Ici, nous spécifions la couche que nous voulons utiliser, la distance, les unités et le nom de sortie (vous pouvez également spécifier d'autres caractéristiques telles que le champ, le type d'anneau, le type de fin et autres). En spécifiant un nom de sortie, après avoir exécuté la commande buffer, une nouvelle couche sera automatiquement téléchargée dans le portail DAaaS GEO contenant la nouvelle entité que vous venez de créer. +
+ +```python +buffer_lyr = use_proximity.create_buffers(item1, distances=[1], + units = "Kilometers", + output_name='item1_buffer') + +display(buffer_lyr) +``` + +Certains utilisateurs préfèrent travailler avec des packages Open Source. La traduction d'ArcGIS vers Spatial Dataframes est simple. +```python +import pandas as pd + +# create a Spatially Enabled DataFrame object +sdf = pd.DataFrame.spatial.from_layer(feature_layer) +```
+ +### Mettre à jour les éléments +En obtenant l'élément comme dans l'exemple ci-dessus, nous pouvons utiliser la fonction '.update' pour mettre à jour un élément existant dans le portail DAaaS GEO. Nous pouvons mettre à jour les propriétés, les données, les vignettes et les métadonnées des éléments. +```python +item1_buffer = gis.content.get('c60c7e57bdb846dnbd7c8226c80414d2') +item1_buffer.update(item_properties={'title': 'Enter Title', + 'tags': 'tag1, tag2, tag3, tag4', + 'description': 'Enter description of item'}) +```
+ +### Visualisez vos données sur une carte interactive + +**Exemple : Librairie Matplotlib** +Dans le code ci-dessous, nous créons un objet ax, qui est un tracé de style carte. Nous traçons ensuite notre colonne "Population Change" sur les axes. +```python +import matplotlib.pyplot as plt +ax = sdf.boundary.plot(figsize=(10, 5)) +shape.plot(ax=ax, column='Population Change', legend=True) +plt.show() +``` + +**Exemple : librairie ipyleaflet** +Dans cet exemple, nous utiliserons la librairie 'ipyleaflet' pour créer une carte interactive. Cette carte sera centrée autour de Toronto, ON. Les données utilisées seront décrites ci-dessous. +Commencez par exécuter ```conda install -c conda-forge ipyleaflet``` pour installer la librairie ipyleaflet dans l'environnement Python. +
Importez les librairies nécessaires. +```python +import ipyleaflet +from ipyleaflet import * +``` +Maintenant que nous avons importé le module ipyleaflet, nous pouvons créer une carte simple en spécifiant la latitude et la longitude de l'emplacement que nous voulons, le niveau de zoom et le fond de carte [(plus de fonds de carte)](https://ipyleaflet.readthedocs.io/en/latest/map_and_basemaps/basemaps.html). Des contrôles supplémentaires ont été ajoutés tels que les calques et l'échelle. +```python +toronto_map = Map(center=[43.69, -79.35], zoom=11, basemap=basemaps.Esri.WorldStreetMap) + +toronto_map.add_control(LayersControl(position='topright')) +toronto_map.add_control(ScaleControl(position='bottomleft')) +toronto_map +```
+ +## En savoir plus sur l'API ArcGIS pour Python +[La documentation complète de l'API ArcGIS peut être trouvée ici](https://developers.arcgis.com/python/) + +## En savoir plus sur l'environnement analytique géospatial (GAE) et les services DAS +[Guide d'aide GAE](https://statcan.github.io/daaas-dads-geo/) \ No newline at end of file diff --git a/docs/fr/images/OAuth2Key.png b/docs/fr/images/OAuth2Key.png new file mode 100644 index 000000000..8336c826f Binary files /dev/null and b/docs/fr/images/OAuth2Key.png differ diff --git a/mkdocs-en.yml b/mkdocs-en.yml index 1454122fe..2d1baa4b4 100644 --- a/mkdocs-en.yml +++ b/mkdocs-en.yml @@ -60,8 +60,11 @@ nav: - Collaboration: - Overview: 4-Collaboration/Overview.md - Request a Namespace: 4-Collaboration/Request-a-Namespace.md + - GAE: 4-Collaboration/Geospatial-Analytical-Environment.md - Storage: - Overview: 5-Storage/Overview.md - Disks: 5-Storage/Disks.md - MinIO: 5-Storage/MinIO.md + - Gitlab: + - Gitlab: 6-Gitlab/Gitlab.md - Help/Contact: Help.md diff --git a/mkdocs-fr.yml b/mkdocs-fr.yml index f08676e53..24e0bfd06 100644 --- a/mkdocs-fr.yml +++ b/mkdocs-fr.yml @@ -62,6 +62,7 @@ nav: - Collaboration: - Aperçu: 4-Collaboration/Aperçu.md - Demander un espace de nom: 4-Collaboration/Demander-EspaceDeNom.md + - EAG: 4-Collaboration/Environnement-Analyse-Geospatiale.md - Stockage: - Aperçu: 5-Stockage/Aperçu.md - MinIo: 5-Stockage/MinIo.md