diff --git a/.gitignore b/.gitignore
index fd5d9d55d..e71fe895a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -21,3 +21,6 @@ yarn-error.log*

 #TypeSense
 /typesense_data
+
+#Migration Folders
+/static/_static/images
diff --git a/docs/best-practices/computing/elastic-cloud-server/building-highly-available-web-server-clusters-with-keepalived.md b/docs/best-practices/computing/elastic-cloud-server/building-highly-available-web-server-clusters-with-keepalived.md
new file mode 100644
index 000000000..9d96dd1be
--- /dev/null
+++ b/docs/best-practices/computing/elastic-cloud-server/building-highly-available-web-server-clusters-with-keepalived.md
@@ -0,0 +1,526 @@
---
id: building-highly-available-web-server-clusters-with-keepalived
title: Building Highly Available Web Server Clusters with Keepalived
tags: [ecs, high-availability, keepalived, cluster]
---

# Building Highly Available Web Server Clusters with Keepalived

Virtual IP addresses are used for active and standby switchover of ECSs
to achieve high availability. If one ECS goes down, the other takes over
and services continue uninterrupted. This article uses CentOS Stream
release 9 ECSs as an example to describe how to set up highly available web server clusters using
Keepalived and Nginx.

## Solution Design

A web cluster consists of multiple web servers and a load balancer.
Access requests are first received by the load balancer, which then
distributes them to the backend web servers based on the load
balancing policy. In this document, Nginx is used to implement load balancing.

### Network Topology

The following table lists the data plan used in this example:

|No. |Item|Quantity |Specification |
|------------------|----|-------------------------------------------|------------------------------|
|1 |VPC |1 |192.168.0.0/16 |
||Subnet |1 |192.168.0.0/24 |
|2 |ECS |2 |1 vCPU, 1 GB, CentOS Stream release 9|
||IP address |2 |ecs-HA1: 192.168.0.10 ecs-HA2: 192.168.0.20|
|3 |EIP |1 |80.158.xxx.xxx |
||Virtual IP address|1 |192.168.0.100 |

- Configure the two ECSs in the same subnet to work in the active/standby mode using Keepalived.
- Bind a single virtual IP address to the two ECSs.
- Bind the virtual IP address to an EIP, so that you can access the active and standby ECSs bound with the virtual IP address from the Internet.

![**Figure 1** Network topology](/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681028.png)

:::note

- Select a region based on your service requirements.
- All cloud resources **must be in the same region**.

:::

### Procedure

The overall operation process is as follows:

![**Figure 2** Operation process](/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681029.png)

## Creating the Cluster

1. *Create a VPC and a subnet.*
    - Log in to the management console.
    - Click *Service List*. Under *Networking*, click *Virtual Private Cloud*.
    - Click *Create VPC*.
      Set required parameters as prompted based on the following table:

      |Parameter |Example Value |
      |--------------------------|--------------|
      |Name (of the VPC) |vpc-HA |
      |CIDR Block (of the VPC) |192.168.0.0/16|
      |Name (of the subnet) |subnet-HA |
      |CIDR Block (of the subnet)|192.168.0.0/24|

    - Click *Create Now*.
2. *Apply for required cloud resources.*
    a. Create ECSs.
        1. Log in to the management console.
        2. Click *Service List*. Under *Computing*, click *Elastic Cloud Server*.
        3. Click *Create ECS*.
        4. On the *Create ECS* page, set parameters as prompted. For details, see
        5. Set the ECS names to `ecs-HA1` and `ecs-HA2`.

           :::note
           In this example, no data disk is purchased. You can buy data
           disks based on service requirements and ensure their service
           data consistency.
           :::

        6. (Optional) Configure security group rules to ensure that the
           two ECSs can communicate with each other.

           In this example, the two ECSs are in the same security group
           and can communicate with each other through the internal
           network by default. In this case, you do not need to
           configure rules.

           If the two ECSs are in different security groups, you need to
           add inbound security group rules for them. For
           details, see [Enabling ECSs in Different Security Groups to
           Communicate with Each Other Through an Internal
           Network](https://docs.otc.t-systems.com/elastic-cloud-server/umn/security/security_groups/security_group_configuration_examples.html#en-us-topic-0140323152-en-us-topic-0118534011-section14197522283).

           ![**Figure 3** Add Inbound Rule](/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001176019342.png)
    b. Assign an EIP.
        1. Log in to the management console.
        2. Click *Service List*. Under *Network*, click *Elastic IP*.
        3. Click *Assign EIP* and set parameters as prompted. For details, see Table 1.
    c. Assign a virtual IP address.
        1. Log in to the management console.
        2. Click *Service List*. Under *Network*, click *Virtual Private Cloud*.
        3. In the navigation pane on the left, click *Subnets*.
        4. In the subnet list, locate the target subnet and click its name.
        5. On the *IP Addresses* tab page, click *Assign Virtual IP Address* and set parameters as prompted.
3. *Configure the ECSs.*
    a. Configure `ecs-HA1`.
        1. Bind EIP (`80.158.xxx.xxx`) to `ecs-HA1`.

            1. Log in to the management console.
            2. Click *Service List*. Under *Computing*, click *Elastic Cloud Server*.
            3. In the ECS list, click the name of `ecs-HA1`.
            4. Click the *EIPs* tab and then *Bind EIP*.
            5. On the *Bind EIP* page, select a NIC and an EIP, and click *OK*.

        2. Connect to `ecs-HA1` using SSH and run the following command
           to install the Nginx and Keepalived packages and related
           dependency packages:

            ```bash
            yum install nginx keepalived -y
            ```
        3. Run the following command to edit the **nginx**
           configuration file and save it:

            ```bash
            vim /etc/nginx/nginx.conf
            ```

            An example is provided as follows:

            ```
            user root;
            worker_processes 1;
            #error_log logs/error.log;
            #error_log logs/error.log notice;
            #error_log logs/error.log info;
            #pid logs/nginx.pid;
            events {
                worker_connections 1024;
            }
            http {
                include mime.types;
                default_type application/octet-stream;
                #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                # '$status $body_bytes_sent "$http_referer" '
                # '"$http_user_agent" "$http_x_forwarded_for"';
                #access_log logs/access.log main;
                sendfile on;
                #tcp_nopush on;
                #keepalive_timeout 0;
                keepalive_timeout 65;
                #gzip on;
                server {
                    listen 80;
                    server_name localhost;
                    #charset koi8-r;
                    #access_log logs/host.access.log main;
                    location / {
                        root html;
                        index index.html index.htm;
                    }
                    #error_page 404 /404.html;
                    # redirect server error pages to the static page /50x.html
                    error_page 500 502 503 504 /50x.html;
                    location = /50x.html {
                        root html;
                    }

                }
            }
            ```
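            Optionally, before starting Nginx, you can verify that the
            edited configuration file is syntactically valid:

            ```bash
            # A healthy file reports "syntax is ok" and "test is successful".
            nginx -t
            ```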
        4. Run the following command to edit the **index.html** file
           and save the file:

            ```bash
            vim /usr/share/nginx/html/index.html
            ```

            An example is provided as follows:

            ```
            Welcome to ECS-HA1
            ```

        5. Run the following commands to set the automatic startup of
           Nginx upon ECS startup:

            ```bash
            systemctl enable nginx
            systemctl start nginx.service
            ```

        6. Verify the access to a single Nginx node.

            ![**Figure 4** ECS-HA1 access verification](/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681030.png)

        7. Run the following command to edit the **keepalived**
           configuration file and save it:

            ```bash
            vim /etc/keepalived/keepalived.conf
            ```

            An example is provided as follows:

            ```
            ! Configuration File for keepalived
            global_defs {
                router_id master-node
            }
            vrrp_script chk_http_port {
                script "/etc/keepalived/chk_nginx.sh"
                interval 2
                weight -5
                fall 2
                rise 1
            }
            vrrp_instance VI_1 {
                state MASTER
                interface ens3
                mcast_src_ip 192.168.0.10
                virtual_router_id 51
                priority 101
                advert_int 1
                authentication {
                    auth_type PASS
                    auth_pass 1111
                }
                unicast_src_ip 192.168.0.10
                virtual_ipaddress {
                    192.168.0.100
                }
                track_script {
                    chk_http_port
                }
            }
            ```

        8. Run the following command to edit the **nginx** monitoring
           script and save it:

            ```bash
            vim /etc/keepalived/chk_nginx.sh
            ```

            An example is provided as follows:

            ```bash
            #!/bin/bash
            counter=$(ps -C nginx --no-heading|wc -l)
            if [ "${counter}" = "0" ]; then
                systemctl start nginx.service
                sleep 2
                counter=$(ps -C nginx --no-heading|wc -l)
                if [ "${counter}" = "0" ]; then
                    systemctl stop keepalived.service
                fi
            fi
            ```

            Run the following commands to add the execute permission for
            the script and create the user that Keepalived uses to run
            monitoring scripts:

            ```bash
            chmod +x /etc/keepalived/chk_nginx.sh
            adduser keepalived_script
            ```

        9. Run the following commands to set the automatic startup of
           Keepalived upon ECS startup:

            ```bash
            systemctl enable keepalived
            systemctl start keepalived.service
            ```
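        After Keepalived starts, `ecs-HA1` should take the VRRP MASTER
        role and add the virtual IP address to its NIC. An optional
        quick check:

        ```bash
        # The service should be active (running).
        systemctl status keepalived.service --no-pager

        # The virtual IP 192.168.0.100 should be listed on ens3.
        ip addr show ens3 | grep 192.168.0.100
        ```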
    b. Configure `ecs-HA2`.
        1. Unbind EIP (`80.158.xxx.xxx`) from `ecs-HA1`.

            1. Log in to the management console.
            2. Click *Service List*. Under *Computing*, click *Elastic Cloud Server*.
            3. In the ECS list, click the name of `ecs-HA1`.
            4. Click the *EIPs* tab.
            5. Locate the row that contains the EIP (`80.158.xxx.xxx`), and click *Unbind*.

        2. Bind EIP (`80.158.xxx.xxx`) to `ecs-HA2`.

            1. Log in to the management console.
            2. Click *Service List*. Under *Computing*, click *Elastic Cloud Server*.
            3. In the ECS list, click the name of `ecs-HA2`.
            4. Click the *EIPs* tab.
            5. Click *Bind EIP*.
            6. Select a NIC and an EIP and click *OK*.

        3. Connect to `ecs-HA2` using SSH and run the following command
           to install the Nginx and Keepalived packages and related
           dependency packages:

            ```bash
            yum install nginx keepalived -y
            ```

        4. Run the following command to edit the **nginx.conf**
           configuration file:

            ```bash
            vim /etc/nginx/nginx.conf
            ```

            An example is provided as follows:

            ```
            user root;
            worker_processes 1;
            #error_log logs/error.log;
            #error_log logs/error.log notice;
            #error_log logs/error.log info;
            #pid logs/nginx.pid;
            events {
                worker_connections 1024;
            }
            http {
                include mime.types;
                default_type application/octet-stream;
                #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                # '$status $body_bytes_sent "$http_referer" '
                # '"$http_user_agent" "$http_x_forwarded_for"';
                #access_log logs/access.log main;
                sendfile on;
                #tcp_nopush on;
                #keepalive_timeout 0;
                keepalive_timeout 65;
                #gzip on;
                server {
                    listen 80;
                    server_name localhost;
                    #charset koi8-r;
                    #access_log logs/host.access.log main;
                    location / {
                        root html;
                        index index.html index.htm;
                    }
                    #error_page 404 /404.html;
                    # redirect server error pages to the static page /50x.html
                    error_page 500 502 503 504 /50x.html;
                    location = /50x.html {
                        root html;
                    }
                }
            }
            ```

        5. Run the following command to edit the **index.html** file:

            ```bash
            vim /usr/share/nginx/html/index.html
            ```

            An example is provided as follows:

            ```
            Welcome to ECS-HA2
            ```

        6. Run the following commands to set the automatic startup of
           Nginx upon ECS startup:

            ```bash
            systemctl enable nginx
            systemctl start nginx.service
            ```

        7. Test the access to a single Nginx node.

            ![**Figure 5** ECS-HA2 verification result](/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681031.png)

        8. Run the following command to edit the Keepalived
           configuration file:

            ```bash
            vim /etc/keepalived/keepalived.conf
            ```

            An example is provided as follows:

            ```
            ! Configuration File for keepalived
            global_defs {
                router_id master-node
            }
            vrrp_script chk_http_port {
                script "/etc/keepalived/chk_nginx.sh"
                interval 2
                weight -5
                fall 2
                rise 1
            }
            vrrp_instance VI_1 {
                state BACKUP
                interface ens3
                mcast_src_ip 192.168.0.20
                virtual_router_id 51
                priority 100
                advert_int 1
                authentication {
                    auth_type PASS
                    auth_pass 1111
                }
                unicast_src_ip 192.168.0.20
                virtual_ipaddress {
                    192.168.0.100
                }
                track_script {
                    chk_http_port
                }
            }
            ```

        9. Run the following command to edit the **nginx** monitoring
           script and add execute permissions:

            ```bash
            vim /etc/keepalived/chk_nginx.sh
            ```

            An example is provided as follows:

            ```bash
            #!/bin/bash
            counter=$(ps -C nginx --no-heading|wc -l)
            if [ "${counter}" = "0" ]; then
                systemctl start nginx.service
                sleep 2
                counter=$(ps -C nginx --no-heading|wc -l)
                if [ "${counter}" = "0" ]; then
                    systemctl stop keepalived.service
                fi
            fi
            ```

            ```bash
            chmod +x /etc/keepalived/chk_nginx.sh
            adduser keepalived_script
            ```

        10. Run the following commands to set the automatic startup of
            Keepalived upon ECS startup:

            ```bash
            systemctl enable keepalived
            systemctl start keepalived
            ```
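        At this point `ecs-HA1` should hold the virtual IP address and
        `ecs-HA2` should stay in the BACKUP state. An optional check on
        `ecs-HA2`:

        ```bash
        # The VIP must not be present on the standby node while the
        # master is healthy; this command should print nothing.
        ip addr show ens3 | grep 192.168.0.100
        ```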
4. *Bind a virtual IP address to an ECS.*
    a. Unbind EIP (`80.158.xxx.xxx`) from `ecs-HA2`.

    b. Bind the virtual IP address to `ecs-HA1`.

        1. Log in to the management console.
        2. Click *Service List*. Under *Network*, click *Virtual Private Cloud*.
        3. In the navigation pane on the left, click *Subnets*.
        4. In the subnet list, locate the target subnet and click its name.
        5. Click the *IP Addresses* tab, locate the row that contains
           the target virtual IP address, and click *Bind to Server*
           in the *Operation* column.
        6. On the page that is displayed, select `ecs-HA1`.
        7. Bind the virtual IP address to `ecs-HA1`. For details, see
           [Binding a Virtual IP Address to an EIP or
           ECS](https://docs.otc.t-systems.com/virtual-private-cloud/umn/virtual_ip_address/binding_a_virtual_ip_address_to_an_eip_or_ecs.html).

    c. Bind the virtual IP address to `ecs-HA2` by referring to the previous step.

    d. Bind the virtual IP address to the EIP `80.158.xxx.xxx`.
        1. Log in to the management console.
        2. Click *Service List*. Under *Network*, click *Virtual Private Cloud*.
        3. In the navigation pane on the left, click *Subnets*.
        4. In the subnet list, locate the target subnet and click its name.
        5. Click the *IP Addresses* tab, locate the row that contains
           the target virtual IP address, and click *Bind to EIP* in
           the *Operation* column.
        6. On the page that is displayed, select the EIP (`80.158.xxx.xxx`).
        7. Click *OK*.

## Verifying High Availability

1. Restart `ecs-HA1` and `ecs-HA2` through the management console.

2. Remotely log in to `ecs-HA1` through the management console.

3. Run the following command to check whether the virtual IP address is
   bound to the `ens3` NIC of `ecs-HA1`:

    ```bash
    ip addr show
    ```

    As shown in Figure 6, the virtual IP address has been bound to the `ens3` NIC
    of `ecs-HA1`.

    ![**Figure 6** Virtual IP address of ecs-HA1](/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681032.png)

4. Use a browser to access the EIP and check whether the web page on
   `ecs-HA1` can be accessed.

    If the information shown in Figure 7 is displayed, the access is normal.

    ![**Figure 7** ecs-HA1 access verification](/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681030.png)

5. Run the following command to disable Keepalived on `ecs-HA1`:

    ```bash
    systemctl stop keepalived.service
    ```

6. Run the following command to check whether `ecs-HA2` has taken over
   the virtual IP address:

    ```bash
    ip addr show
    ```

    ![**Figure 8** Virtual IP address of ecs-HA2](/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681032.png)

7. Use a browser to access the EIP and check whether the web page on
   `ecs-HA2` can be accessed.

    If the information shown in Figure 9 is displayed, the access is normal.

    ![**Figure 9** ecs-HA2 access verification](/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681031.png)
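To watch the active/standby switchover end to end, you can poll the EIP
from any machine with Internet access while stopping Keepalived on the
active node. Replace the example EIP with your actual address:

```bash
# Prints the page served through the virtual IP once per second; the
# output should change from "Welcome to ECS-HA1" to "Welcome to ECS-HA2"
# after Keepalived is stopped on ecs-HA1.
while true; do curl -s --max-time 2 http://80.158.xxx.xxx; echo; sleep 1; done
```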
The +target account accepts the shared image and attaches the new data disk +created from the shared image to an existing or new ECS. + +![*Figure 1* Migration process](/img/docs/best-practices/computing/image-management-service/en-us_image_0295094264.png) + +## Creating a Data Disk Image + +Assume that *hello-world.txt* is stored on the data disk of your ECS +and you want to migrate the file to another account. + +![image1](/img/docs/best-practices/computing/image-management-service/en-us_image_0295099813.png) + +1. Log in to the management console and switch to the related region. + +2. Under *Service List*, choose *Compute* -> *Image ManagementService*. + + The *Image Management Service* page is displayed. + +3. In the upper right corner, click *Create Image*. + + The *Create Image* page is displayed. + +4. Set parameters. + + ![*Figure 2* Creating a data disk + image](/img/docs/best-practices/computing/image-management-service/en-us_image_0000001251619009.png) + + - **Type**: Select *Data disk image*. + - **Source**: Select *ECS* and then select the `data disk + ecs-disk-image-test-volume` data disk. + - **Name**: Enter a name for the data disk image, for example, `disk-image-test`. + - **Enterprise Project**: Select `default`. + +5. Click *Next*. + +6. Confirm the settings, read and agree to the agreement, and click + *Submit*. + +7. The system redirects to the private image list. Wait for several + minutes and check whether the data disk image is successfully + created. + + ![*Figure 3* Viewing private + images](/img/docs/best-practices/computing/image-management-service/en-us_image_0295100003.png) + +## Sharing the Image with the Target Account + +Share the data disk image created in [Step 1](#creating-a-data-disk-image) with the target account. Before the image sharing, obtain +the project ID of the target account. (You can obtain the project ID from *My Credentials*: + +![*Figure 4* Viewing the project +ID](/img/docs/best-practices/computing/image-management-service/en-us_image_0000001138989308.png) + +1. Locate the row that contains the `disk-image-test` private image. + Choose *More* -> *Share* in the *Operation* column. + + The *Share Image* dialog box is displayed. + +2. In the *Share Image* dialog box, enter the project ID of the + target account. + +3. Click *OK*. + +## Accepting the Shared Image + +Accept the shared data disk image. + +1. Log in to the management console using the account the image is + shared with and switch to the related region. + +2. Under *Service List*, choose *Compute* -> *Image Management + Service*. Then, click the *Images Shared with Me* tab. + +3. Select *disk-image-test* and click *Accept*. + + ![*Figure 5* Accepting a shared + image](/img/docs/best-practices/computing/image-management-service/en-us_image_0000001251966577.png) + + After the image is accepted, it is displayed in the shared image + list. + +## Creating a Data Disk or an ECS + +Use the shared image to create a new data disk and attach it to an +existing ECS. Alternatively, create an ECS with a data disk created from +the shared image. Then, check whether the service data is successfully +migrated. + +- Create a new data disk and attach it to an existing ECS. + 1. Locate the row that contains the shared image + `disk-image-test`, and click *Create Data Disk* in the + *Operation* column. + + ![*Figure 6* Creating a data + disk](/img/docs/best-practices/computing/image-management-service/en-us_image_0295117864.png) + + The page for purchasing EVS disks is displayed. + + 2. 
Configure the billing mode and disk specifications as needed. + **The AZ must be the same as that of the ECS to which the data + disk will be attached**. Click *Next*. + + 3. Return to the EVS disk list. **Wait** for several minutes until the + EVS disk is created successfully. + + 4. Locate the row that contains the new EVS disk and click + *Attach* in the *Operation* column to attach the data disk + to the ECS. + + 5. Log in to the ECS and check whether the service data is + successfully migrated. + + Run the `fdisk -l` command. The command output shows that the + new data disk has been partitioned. + + ![image2](/img/docs/best-practices/computing/image-management-service/en-us_image_0295125718.png) + + Mount the new partition to a directory of the ECS and check the + *hello-world.txt* file. The file content is properly printed, + which means that the service data migration is successful. + + ![image3](/img/docs/best-practices/computing/image-management-service/en-us_image_0295125796.png) +- Create an ECS with a data disk attached. + 1. Under *Service List*, choose *Compute* -> *Elastic Cloud + Server*. + + 2. In the upper right corner, click *Create ECS*. + + The page for purchasing ECSs is displayed. + + 3. Configure the billing mode, AZ, and specifications as needed and + add a data disk which will be created from the shared data disk + image. Complete the ECS creation as instructed. + + ![*Figure 7* Adding a data + disk](/img/docs/best-practices/computing/image-management-service/en-us_image_0295128562.png) + + 4. Wait for several minutes and check whether the new ECS is + displayed in the ECS list. + + 5. Log in to the new ECS and check whether the service data is + successfully migrated. + + Run the `fdisk -l` command. The command output shows that the + new data disk has been partitioned. Mount the new partition to a + directory of the ECS and check the *hello-world.txt* file. The + file content is properly printed, which means that the service + data migration is successful. + + ![image4](/img/docs/best-practices/computing/image-management-service/en-us_image_0295129442.png) \ No newline at end of file diff --git a/docs/best-practices/containers/cloud-container-engine/migrating-from-other_clouds-to-cce.md b/docs/best-practices/containers/cloud-container-engine/migrating-from-other_clouds-to-cce.md index 09b641f66..80c343909 100644 --- a/docs/best-practices/containers/cloud-container-engine/migrating-from-other_clouds-to-cce.md +++ b/docs/best-practices/containers/cloud-container-engine/migrating-from-other_clouds-to-cce.md @@ -11,11 +11,8 @@ Assume that you have deployed the WordPress on 3rd party cloud provider and crea ## Solution Design ![image1](/img/docs/best-practices/containers/cloud-container-engine/en-us_image_0000001402114285.png) - - ![image1](/img/docs/best-practices/containers/cloud-container-engine/en-us_image_0264642164.png) - ## Migrating Data ### Migrating Databases and Storage @@ -24,10 +21,10 @@ Assume that you have deployed the WordPress on 3rd party cloud provider and crea ### Migrating Container Images -1. Export the container images used in the other clusters: Pull the images to the client by referring to the operation guide of +1. Export the container images used in the other clusters: Pull the images to the client by referring to the operation guide of other Cloud Container Registry. -2. Upload the image files to Open Telekom Cloud SWR: Run the `docker pull` command to push the image to Open Telekom +2. 
Upload the image files to Open Telekom Cloud SWR: Run the `docker push` command to push the image to Open Telekom
Cloud. For details, see [Uploading an Image Through the Client](https://docs.otc.t-systems.com/software-repository-container/umn/image_management/uploading_an_image_through_the_client.html).

## Installing the Migration Tool

@@ -51,23 +48,23 @@ CCE supports backup and restore using the **e-backup add-on**, which is
compatible with Velero and uses OBS as the storage backend. You can use
Velero in on-premises clusters and use e-backup in CCE.

- **Without e-backup**: Install Velero in the source and target clusters and
  migrate resources by referring to [Migrating Resources in a Cluster (Velero)](#migrating-resources-in-a-cluster).
- **With e-backup**: Install Velero in the source cluster and use OBS as
  the storage backend by following the instructions described in [Installing Velero](#installing-velero), and install e-backup in the target CCE cluster and migrate resources by referring to [Migrating Resources in a Cluster (Velero)](#migrating-resources-in-a-cluster).

### Prerequisites

- The Kubernetes version of the source on-premises cluster must be
  1.10 or later, and the cluster can use DNS and Internet services properly.
- If you use OBS to store backup files, obtain the AK/SK of a user who
  has the right to operate OBS. For details, see [Obtaining Access Keys (AK/SK)](https://docs.otc.t-systems.com/object-storage-service/api-ref/appendixes/obtaining_access_keys_ak_sk.html).
- If you use MinIO to store backup files, bind an EIP to the server
  where MinIO is installed and enable the API and console ports of
  MinIO in the security group.
- The target CCE cluster has been created.
- The source cluster and target cluster must each have at least one
  idle node. It is recommended that the node specifications be 4
  vCPUs and 8 GiB memory or higher.

@@ -81,20 +78,20 @@ files, skip this section and go to [Installing Velero](#installing-velero)

MinIO can be installed in any of the following locations:

- Temporary ECS outside a cluster. If the MinIO server is installed outside the cluster, backup files
  will not be affected when a catastrophic fault occurs in the
  cluster.
- Idle nodes in a cluster. You can remotely log in to a node to install the MinIO server or
  containerize MinIO. For details, see the [official Velero documentation](https://velero.io/docs/v1.7/contributions/minio/#set-up-server).

:::important
For example, to containerize MinIO, do as follows:

- Change the storage type (`emptyDir`) in the YAML file
  provided by Velero to `HostPath` or `Local`. Otherwise, backup
  files will be permanently lost after the pods are restarted.
-- Change the Service type to `NodePort` or use other types of +- Change the Service type to `NodePort` or use other types of Services that support public network access to ensure that MinIO can be accessed by external networks. Otherwise, backup files cannot be downloaded outside the cluster. @@ -108,7 +105,7 @@ group. Otherwise, backup files cannot be uploaded or downloaded. In this example, MinIO is installed on a temporary ECS outside the cluster. -1. Download MinIO. +1. Download MinIO. ```bash mkdir /opt/minio @@ -118,7 +115,7 @@ cluster. chmod +x minio ``` -2. Configure the username and password of MinIO. +2. Configure the username and password of MinIO. The username and password set using this method are temporary environment variables and must be reset after the service is @@ -130,7 +127,7 @@ cluster. export MINIO_ROOT_PASSWORD=minio123 ``` -3. Create a service. In the command, `/opt/miniodata/` indicates the +3. Create a service. In the command, `/opt/miniodata/` indicates the local disk path for MinIO to store data. The default API port of MinIO is `9000`, and the console port is @@ -147,9 +144,9 @@ cluster. the object bucket will fail. ::: -4. Use a browser to access `http://:30840`. The MinIO console page is displayed. +4. Use a browser to access `http://:30840`. The MinIO console page is displayed. -### Installing Velero +### Installing Velero Go to the OBS or MinIO console and create a bucket named `velero` to store backup files. You can custom the bucket name, which must be used @@ -157,35 +154,35 @@ when installing Velero. Otherwise, the bucket cannot be accessed and the backup fails. :::important -- Velero instances need to be installed and deployed in both the +- Velero instances need to be installed and deployed in both the **source and target clusters**. The installation procedures are the same, which are used for backup and restoration, respectively. -- The master node of a CCE cluster does not provide a port for remote +- The master node of a CCE cluster does not provide a port for remote login. You can install Velero using `kubectl`. -- If there are a large number of resources to back up, you are advised +- If there are a large number of resources to back up, you are advised to adjust the CPU and memory resources of Velero and Restic to 1 vCPU and 1 GiB memory or higher. -- The object storage bucket for storing backup files must be +- The object storage bucket for storing backup files must be **empty**. ::: Download the latest, [stable binary file](https://github.com/vmware-tanzu/velero/releases). This article uses Velero `1.7.0` as an example. **The installation process in the source cluster is the same as that in the target cluster**: -1. Download the binary file of Velero 1.7.0. +1. Download the binary file of Velero 1.7.0. ```bash wget https://github.com/vmware-tanzu/velero/releases/download/v1.7.0/velero-v1.7.0-linux-amd64.tar.gz ``` -2. Install the Velero client. +2. Install the Velero client. ```bash tar -xvf velero-v1.7.0-linux-amd64.tar.gz cp ./velero-v1.7.0-linux-amd64/velero /usr/local/bin ``` -3. Create the access key file **credentials-velero** for the backup +3. Create the access key file **credentials-velero** for the backup object storage. ```bash @@ -205,7 +202,7 @@ Velero `1.7.0` as an example. **The installation process in the source cluster i aws_secret_access_key={SK} ``` -4. Deploy the Velero server. Change the value of `\--bucket` to the +4. Deploy the Velero server. 
Change the value of `\--bucket` to the name of the created object storage bucket. In this example, the bucket name is `velero`. For more information about custom installation parameters, see [Customize Velero Install](https://velero.io/docs/v1.7/customize-installation/). @@ -221,79 +218,22 @@ Velero `1.7.0` as an example. **The installation process in the source cluster i --backup-location-config region=eu-de,s3ForcePathStyle="true",s3Url=http://obs.eu-de.otc.t-systems.com ``` - - -1. By default, a namespace named `velero` is created for the Velero + | Parameter | Description | + | ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | + | --provider | Vendor who provides the plug-in. | + | --plugins | API component compatible with AWS S3. Both OBS and MinIO support the S3 protocol. | + | --bucket | Name of the object storage bucket for storing backup files. The bucket must be created in advance. | + | --secret-file | Secret file for accessing the object storage, that is, the **credentials-velero** file created before | + | --use-restic | Whether to use **Restic** to support PV data backup. You are advised to enable this function. Otherwise, storage volume resources cannot be backed up. | + | --use-volume-snapshots | Whether to create the VolumeSnapshotLocation object for PV snapshot, which requires support from the snapshot program. Set this parameter to false. | + | --backup-location-config | OBS bucket configurations, including `region`, `s3ForcePathStyle`, and `s3Url`. | + | region | Region to which object storage bucket belongs. If OBS is used, set this parameter according to your region, for example, `eu-nl`. If MinIO is used, set this parameter to minio. | + | s3ForcePathStyle | The value true indicates that the S3 file path format is used. | + | s3Url | API access address of the object storage bucket. If OBS is used, set this parameter to `http://obs.{region}.otc.t-systems.com` (region indicates the region where the object storage bucket is located). For example, if the region is Germany (`eu-de`), the value is `http://obs.eu-de.otc.t-systems.com`. If MinIO is used, set this parameter to `http://{EIP of the node where minio is located}:9000`. The value of this parameter is determined based on the IP address and port of the node where MinIO is installed. **Note**: The access port in s3Url must be set to the API port of MinIO instead of the console port. The default API port of MinIO is `9000`. To access MinIO installed outside the cluster, enter the public IP address of MinIO. | + + **Table 1** Installation parameters of Velero + +5. By default, a namespace named `velero` is created for the Velero instance. Run the following command to view the pod status: ```shell @@ -309,7 +249,7 @@ Velero `1.7.0` as an example. **The installation process in the source cluster i Restic and Velero. ::: -2. 
Check the interconnection between Velero and the object storage and +6. Check the interconnection between Velero and the object storage and ensure that the status is `Available`. ```shell @@ -337,18 +277,18 @@ migration based on the migration result. ### Prerequisites -- Before the migration, clear the abnormal pod resources in the source +- Before the migration, clear the abnormal pod resources in the source cluster. If the pod is in the abnormal state and has a PVC mounted, the PVC is in the pending state after the cluster is migrated. -- Ensure that the cluster on the CCE side does not have the same +- Ensure that the cluster on the CCE side does not have the same resources as the cluster to be migrated because Velero does not restore the same resources by default. -- To ensure that container images can be properly pulled after cluster +- To ensure that container images can be properly pulled after cluster migration, migrate the images to SWR. -- CCE **does not** support EVS disks of the `ReadWriteMany` type. If +- CCE **does not** support EVS disks of the `ReadWriteMany` type. If resources of this type exist in the source cluster, change the storage type to `ReadWriteOnce`. -- Velero integrates the Restic tool to back up and restore storage +- Velero integrates the Restic tool to back up and restore storage volumes. Currently, the storage volumes of the HostPath type are not supported. For details, see [Restic Restrictions](https://velero.io/docs/v1.7/restic/#limitations). To back up storage volumes of this type, replace the `hostPath` volumes @@ -363,7 +303,7 @@ will not cause a backup failure. ### Backing Up an Application in the Source Cluster -1. (Optional) To back up the data of a specified storage volume in the +1. (Optional) To back up the data of a specified storage volume in the pod, add an annotation to the pod. The annotation template is as follows: @@ -371,9 +311,9 @@ will not cause a backup failure. kubectl -n annotate backup.velero.io/backup-volumes=,,... ``` - - `\`: namespace where the pod is located. - - `\`: pod name. - - `\`: name of the persistent volume mounted to + - `\`: namespace where the pod is located. + - `\`: pod name. + - `\`: name of the persistent volume mounted to the pod. You can run the `describe` statement to query the pod information. The `Volume` field indicates the names of all persistent volumes attached to the pod. @@ -388,12 +328,12 @@ will not cause a backup failure. kubectl annotate pod/mysql-5ffdfbc498-c45lh backup.velero.io/backup-volumes=mysql-storage ``` -2. Back up the application. During the backup, you can specify +2. Back up the application. During the backup, you can specify resources based on parameters. If no parameter is added, the entire cluster resources are backed up by default. For details about the parameters, see [Resource filtering](https://velero.io/docs/v1.7/resource-filtering/). - - `\--default-volumes-to-restic`: indicates that Restic is used + - `\--default-volumes-to-restic`: indicates that Restic is used to back up all storage volumes mounted to a pod. `HostPath` volumes are not supported. If this parameter is left blank, the storage volume specified by annotation in @@ -403,17 +343,17 @@ will not cause a backup failure. ```bash velero backup create --default-volumes-to-restic ``` - - `\--include-namespaces`: backs up resources in a specified namespace. + - `\--include-namespaces`: backs up resources in a specified namespace. 
```bash velero backup create --include-namespaces ``` - - `\--include-resources`: backs up the specified resources. + - `\--include-resources`: backs up the specified resources. ```bash velero backup create --include-resources deployments ``` - - `\--selector`: backs up resources that match the selector. + - `\--selector`: backs up resources that match the selector. ```bash velero backup create --selector = @@ -433,7 +373,7 @@ will not cause a backup failure. Backup request "wordpress-backup" submitted successfully. Run `velero backup describe wordpress-backup` or `velero backup logs wordpress-backup` for more details. ``` -3. Check the backup status. +3. Check the backup status. ```bash velero backup get @@ -462,7 +402,7 @@ storage interfaces between the two clusters when creating a workload and request storage resources of the corresponding type. For details, see [Updating the Storage Class](./updating-resources#updating-the-storage-class) -1. Create a `ConfigMap` in the CCE cluster and map the storage class used +1. Create a `ConfigMap` in the CCE cluster and map the storage class used by the source cluster to the default storage class of the CCE cluster. @@ -471,10 +411,10 @@ request storage resources of the corresponding type. For details, see `csi-disk`. :::note - - When an application containing PV data is restored in a CCE + - When an application containing PV data is restored in a CCE cluster, the defined storage class dynamically creates and mounts storage resources (such as EVS volumes) based on the PVC. - - The storage resources of the cluster can be changed as required, + - The storage resources of the cluster can be changed as required, not limited to EVS volumes. To mount other types of storage, such as file storage and object storage, see [Updating the Storage Class](./updating-resources#updating-the-storage-class) @@ -496,7 +436,7 @@ request storage resources of the corresponding type. For details, see default:csi-disk ``` -2. Use the Velero tool to create a restore and specify a backup named +2. Use the Velero tool to create a restore and specify a backup named `wordpress-backup` to restore the WordPress application to the CCE cluster. @@ -507,7 +447,7 @@ request storage resources of the corresponding type. For details, see You can run the `velero restore get` statement to view the application restoration status. -3. After the restoration is complete, check whether the application is +3. After the restoration is complete, check whether the application is running properly. If other adaptation problems may occur, rectify the fault by following the procedure described in [Updating Resources Accordingly](./updating-resources). @@ -518,7 +458,7 @@ request storage resources of the corresponding type. For details, see Prepare the object storage and save its AK/SK. -1. Install the MinIO. +1. Install the MinIO. ```bash # Binary installation @@ -540,7 +480,7 @@ Prepare the object storage and save its AK/SK. kubectl apply -f ./velero-v1.4.0-linux-amd64/examples/minio/00-minio-deployment.yaml ``` -2. Create a bucket, which will be used in the migration. +2. Create a bucket, which will be used in the migration. Open the web page of the MinIO service. Use `MINIO_ACCESS_KEY`/`MINIO_SECRET_KEY` to log in to the MinIO service. In this example, use `minio`/`minio123`. Click *Create bucket*. In this example, create a bucket named `velero`. @@ -548,13 +488,13 @@ Prepare the object storage and save its AK/SK. 
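Optionally, confirm that the bucket is reachable before installing
Velero. One way is the MinIO client (`mc`); the alias name below is
arbitrary and the credentials are the example values set earlier:

```bash
# Register the MinIO endpoint and list its buckets; "velero" should appear.
mc alias set migration-minio http://<EIP of the MinIO server>:9000 minio minio123
mc ls migration-minio
```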
Perform the following operations on the source cluster nodes and CCE cluster nodes that can run `kubectl` commands:

1. Download the latest stable version of the migration tool from the [Velero releases page](https://github.com/heptio/velero/releases).

    :::note
    This article uses **velero-v1.4.0-linux-amd64.tar.gz** as an example.
    :::

2. Install the Velero client.

    ```bash
    mkdir /opt/ack2cce
    cd /opt/ack2cce
    tar -xvf velero-v1.4.0-linux-amd64.tar.gz -C /opt/ack2cce
    cp /opt/ack2cce/velero-v1.4.0-linux-amd64/velero /usr/local/bin
    ```

3. Install the Velero server.

    - Prepare the MinIO authentication file:

diff --git a/docs/best-practices/databases/document-database-service/from-ecs-hosted-mongodb-to-dds.md b/docs/best-practices/databases/document-database-service/from-ecs-hosted-mongodb-to-dds.md
index 8c3c15f57..bc37fbc05 100644
--- a/docs/best-practices/databases/document-database-service/from-ecs-hosted-mongodb-to-dds.md
+++ b/docs/best-practices/databases/document-database-service/from-ecs-hosted-mongodb-to-dds.md
@@ -1,5 +1,333 @@
---
id: from-ecs-hosted-mongodb-to-dds
-title: From Other Cloud MongoDB to DDS
-tags: [dds, migration, mongodb]
+title: From ECS-hosted MongoDB to DDS
+tags: [dds, migration, mongodb, ecs, drs]
---


# From ECS-hosted MongoDB to DDS

DRS helps you migrate data from MongoDB databases on ECSs to DDS
instances on the current cloud. With DRS, you can migrate databases
online with zero downtime, and your services and databases remain
operational during the migration.

## Solution Design

This section describes how to use DRS to migrate data from an ECS
database to a DDS instance on the current cloud. The following network
scenarios are supported:

### Source and destination databases are in the same VPC

![**Figure 1** Source and destination databases in the same VPC](/img/docs/best-practices/databases/document-database-service/en-us_image_0295762707.png)

### Source and destination databases are in different VPCs

![**Figure 2** Source and destination databases in the same region and different VPCs](/img/docs/best-practices/databases/document-database-service/en-us_image_0180865321.png)

### Procedure

![**Figure 3** Flowchart](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001213229532.png)

#### Migration Suggestions

:::important

- Database migration is closely impacted by a wide range of
  environmental and operational factors. To ensure the migration goes
  smoothly, perform a test run before the actual migration to help you
  detect and resolve any potential issues in advance. Recommendations
  on how to minimize any potential impact on your database are
  provided in this section.
- It is **strongly recommended** that you start your migration task during
  off-peak hours. A less active database is easier to migrate
  successfully. If the data is fairly static, there is less likely to
  be any severe performance impact during the migration.

:::

#### Notes on Migration

:::important
For details, see
[precautions](https://docs.otc.t-systems.com/data-replication-service/umn/real-time_migration/to_the_cloud/index.html#drs-online-migration)
on using specific migration tasks in *Data Replication Service Real-Time
Migration*.
:::
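Before creating the migration task, it can be useful to confirm that
the source MongoDB database is reachable from the destination VPC. A
minimal sketch, assuming the `mongosh` client is installed on a test
ECS and using placeholder connection details:

```bash
# A successful ping returns { ok: 1 }.
mongosh "mongodb://<user>:<password>@<source-ecs-ip>:27017/admin" --eval "db.adminCommand({ ping: 1 })"
```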
## Prerequisites

1. Permissions:

    **Table 1** below lists the permissions required for the source and destination
    databases when migrating data from a MongoDB database on an ECS to
    DDS on the current cloud.

    | Database | Full Migration Permission | Full+Incremental Migration Permission |
    | -------- | ------------------------- | ------------------------------------- |
    | Source | **Replica set**: The source database user must have the read permission for the database to be migrated.
**Single node**: The source database user must have the read permission for the database to be migrated.
**Cluster**: The source database user must have the read permission for the databases to be migrated and the config database. To migrate accounts and roles of the source database, the source database user must have the read permission for the `system.users` and `system.roles` system tables of the admin database. | **Replica set**: The source database user must have the read permission for the databases to be migrated and the local database.
**Single node**: The source database user must have the `read` permission for the databases to be migrated and the local database.
**Cluster**: The source mongos node user must have the `readAnyDatabase` permission for the databases to be migrated and the config database. The source shard node user must have the `readAnyDatabase` permission for the **admin** database and the `read` permission for the **local** database. To migrate accounts and roles of the source database, the source database user must have the read permission for the `system.users` and `system.roles` system tables of the **admin** database. | + | Destination | The destination database user must have the `dbAdminAnyDatabase` permission for the **admin** database and the `readWrite` permission for the **destination** database. If the destination database is a cluster instance, the migration account must have the read permission for the config database. | | + + **Table 1** Migration permissions + + - Source database permissions: + + The source MongoDB database user must have all the required + permissions listed in the table above. + If the permissions are insufficient, create a user that has all + of the permissions on the source database. + + - Destination database permissions: + + The initial account of the DDS instance has the required + permissions. + +2. Network settings + + - The source database and destination DDS DB instance must be in + the same region. + - The source database and destination DDS DB instance can be + either in the same VPC or different VPCs. + - If the source and destination databases are in different + VPCs, the subnets of the source and destination databases + are required to be in different CIDR blocks. You need to + create a VPC peering connection between the two VPCs. + + For details, see [VPC Peering Connection + Overview](https://docs.otc.t-systems.com/virtual-private-cloud/umn/vpc_peering_connection/vpc_peering_connection_overview.html) + in the *Virtual Private Cloud User Guide*. + + - If the source and destination databases are in the same VPC, + the networks are interconnected by default. + +3. Security rules + + - In the same VPC, the network is connected by default. You do not + need to set a security group. + - In different VPCs, establish a VPC peering connection between + the two VPCs. You do not need to set a security group. + +4. Other + + You need to export the user information of the MongoDB database + first and manually add it to the destination DDS DB instance because + the user information will not be migrated. + +## Migrating the Database + +1. Create a migration task. + + 1. Log in to the management console and choose *Databases* -> + *Data Replication Service* to go to the *DRS console*. + + 2. On the *Online Migration Management* page, click *Create + Migration Task*. + + 3. On the *Create Replication Instance* page, configure the task + details, recipient, and replication instance and click *Next*. + + ![**Figure 4** Replication instance information](/img/docs/best-practices/databases/document-database-service/en-us_image_0232589882.png) + + | Parameter | Description | + | ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | Project | The project corresponds to the current region and can be changed. | + | Task Name | The task name consists of 4 to 50 characters, starts with a letter, and can contain only letters (case-insensitive), digits, hyphens (-), and underscores (\_). 
| + | Description | The description consists of a maximum of 256 characters and cannot contain the following special characters: =\<\>&'\\" | + + **Table 2** Task settings + + | Parameter | Description | + | ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | Data Flow | To the cloud | + | Source DB Engine | Select MongoDB. | + | Destination DB Engine | Select DDS. | + | Network Type | Select VPC. | + | Destination DB Instance | The DDS DB instance you purchased. | + | Migration Type | Select Full+Incremental as an example:
**Full**: This migration type is suitable for scenarios where a service interruption is acceptable. All objects and data in non-system databases are migrated to the destination database at one time. The objects include tables, views, and stored procedures.
If you perform a full migration, you are advised to stop operations on the source database. Otherwise, data generated in the source database during the migration will not be synchronized to the destination database.
**Full+Incremental**: This migration type allows you to migrate data without interrupting services. After a full migration initializes the destination database, an incremental migration initiates and parses logs to ensure data consistency between the source and destination databases. Note If you select the Full+Incremental migration type, data generated during the full migration will be synchronized to the destination database with zero downtime, ensuring that both the source and destination databases remain accessible. | + | Source DB Instance Type | If you select Full+Incremental for Migration Type, set this parameter based on the source database. Non-cluster is selected as an example. If the source database is a cluster instance, set this parameter to Cluster. If the source database is a replica set or a single node instance, set this parameter to Non-cluster. | + | Obtain Incremental Data | This parameter is available for configuration if Source DB Instance Type is set to Cluster. You can determine how to capture data changes during the incremental synchronization.
- **oplog**: For MongoDB 3.2 or later, DRS directly connects to each shard of the source DB instance to extract data. If you select this mode, you must disable the balancer of the source instance. When testing the connection, you need to enter the connection information of each shard node of the source instance.
- **changeStream**: This method is recommended. For MongoDB 4.0 and later, DRS connects to mongos nodes of the source instance to extract data. If you select this method, you must enable the WiredTiger storage engine of the source instance. Only whitelisted users can use changeStream. To use this function, submit a service ticket. In the upper right corner of the management console, choose *Service Tickets* -> *Create Service Ticket* to submit a service ticket. | + | Source Shard Quantity | If Source DB Instance Type is set to Cluster and Obtain Incremental Data is set to oplog, enter the number of source shard nodes. The default minimum number of source DB instances is 2 and the maximum number is 32. You can set this parameter based on the number of source database shards. | + + **Table 3** Replication instance information + + 4. On the *Configure Source and Destination Databases* page, wait + until the replication instance is created. Then, specify source + and destination database information and click *Test + Connection* for both the source and destination databases to + check whether they have been connected to the replication + instance. After the connection tests are successful, select the + check box before the agreement and click *Next*. + + ![**Figure 5** Source and destination database details](/img/docs/best-practices/databases/document-database-service/en-us_image_0232605869.png) + + | Parameter | Description | + | ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | Source Database Type | Select Self-built on ECS. | + | VPC | A dedicated virtual network in which the source database is located. It isolates networks for different services. You can select an existing VPC or create a VPC. For details on how to create a VPC, see [Creating a VPC](https://docs.otc.t-systems.com/virtual-private-cloud/umn/vpc_and_subnet/vpc/creating_a_vpc.html). | + | Subnet | A subnet provides dedicated network resources that are logically isolated from other networks, improving network security. The subnet must be in the AZ where the source database resides. You need to enable DHCP for creating the source database subnet. For details on how to create a VPC, see the [Creating a VPC](https://docs.otc.t-systems.com/virtual-private-cloud/umn/vpc_and_subnet/vpc/creating_a_vpc.html) section in the Virtual Private Cloud User Guide. | + | IP Address or Domain Name | The IP address or domain name of the source database. | + | Port | The port of the source database. Range: 1 - 65535 | + | Database Username | A username for the source database. | + | Database Password | The password for the database username. | + | SSL Connection | To improve data security during the migration, you are advised to enable SSL to encrypt migration links and upload a CA certificate. 
| + + **Table 4** Source database information + + | Parameter | Description | + | ----------------- | ----------------------------------------------------------------------------------------------------------------------- | + | DB Instance Name | The DDS DB instance you have selected during the migration task creation is displayed by default and cannot be changed. | + | Database Username | The username for accessing the destination DDS DB instance. | + | Database Password | The password for the database username. | + + **Table 5** Destination database information + + 5. On the *Set Task* page, select migration objects and click + *Next*. + + ![**Figure 6** Migration object](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001198097583.png) + + | Parameter | Description | + | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | + | Migrate Account | There are accounts that can be migrated completely and accounts that cannot be migrated. You can choose whether to migrate the accounts. Accounts that cannot be migrated or accounts that are not selected will not exist in the destination database. Ensure that your services will not be affected by these accounts. Yes If you choose to migrate accounts, see [Migrating Accounts](https://docs.otc.t-systems.com/data-replication-service/umn/real-time_migration/task_management/managing_objects/migrating_accounts.html) in Data Replication Service User Guide to migrate database users and roles. No During the migration, accounts and roles are not migrated. | + | Migrate Object | You can choose to migrate all objects, tables, or databases based on your service requirements. All: All objects in the source database are migrated to the destination database. After the migration, the object names will remain the same as those in the source database and cannot be modified. Tables: The selected table-level objects will be migrated. Databases: The selected database-level objects will be migrated. If the source database is changed, click in the upper right corner before selecting migration objects to ensure that the objects to be selected are from the changed source database. 
Note If you choose not to migrate all of the databases, the migration may fail because the objects, such as stored procedures and views, in the database to be migrated may have dependencies on other objects that are not migrated. To ensure a successful migration, you are advised to migrate all of the databases. When you select an object, the spaces before and after the object name are not displayed. If there are two or more consecutive spaces in the middle of the object name, only one space is displayed. The search function can help you quickly select the required database objects. | + + **Table 6** Migration object + + 6. On the *Check Task* page, check the migration task. + + - If any check fails, review the cause and rectify the fault. + After the fault is rectified, click *Check Again*. + + :::note + For details about how to handle check failures, see + [Checking Whether the Source Database Is + Connected](https://docs.otc.t-systems.com/data-replication-service/umn/troubleshooting/solutions_to_failed_check_items/networks/checking_whether_the_source_database_is_connected.html) + in *Data Replication Service User Guide*. + ::: + + - If all check items are successful, click *Next*. + + ![**Figure 7** Task Check](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001152137438.png) + + You can proceed to the next step only when all check items are + successful. If any alarms are generated, view and confirm the + alarm details first before proceeding to the next step. + + 7. On the displayed page, specify *Start Time* and confirm that + the configured information is correct and click *Submit* to + submit the task. + + ![**Figure 8** Task startup settings](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001199158158.png) + + | Parameter | Description | + | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | Start Time | Set Start Time to Start upon task creation or Start at a specified time based on site requirements. The Start at a specified time option is recommended. Note The migration task may affect the performance of the source and destination databases. You are advised to start the task in off-peak hours and reserve two to three days for data verification. | + + **Table 7** Task startup settings + + 8. After the task is submitted, go back to the *Online Migration + Management* page to view the task status. + +2. Manage the migration task. + + The migration task contains two phases: full migration and + incremental migration. You can manage them in different phases. + + - **Full migration** + - Viewing the migration progress: Click the target full + migration task, and on the *Migration Progress* tab, you + can see the migration progress of the structure, data, + indexes, and migration objects. When the progress reaches + 100%, the migration is complete. + - Viewing migration details: In the migration details, you can + view the migration progress of a specific object. If the + number of objects is the same as that of migrated objects, + the migration is complete. You can view the migration + progress of each object in detail. Currently, this function + is available only to whitelisted users. 
You can submit a service ticket to apply for this function.
      - **Incremental Migration**
          - Viewing the synchronization delay: After the full migration is complete, an incremental migration starts. On the *Online Migration Management* page, click the target migration task. On the displayed page, click *Migration Progress* to view the synchronization delay of the incremental migration. If the synchronization delay is 0s, the destination database is being synchronized with the source database in real time. You can also view the data consistency on the *Migration Comparison* tab.

            ![**Figure 9** Viewing the synchronization delay](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001243756137.png)

          - Viewing the migration results: On the *Online Migration Management* page, click the target migration task. On the displayed page, click *Migration Comparison* and perform a migration comparison in accordance with the comparison process, which should help you determine an appropriate time for migration to minimize service downtime.

            ![**Figure 10** Database comparison process](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001213070166.png)

            For details, see [Comparing Migration Items](https://docs.otc.t-systems.com/data-replication-service/umn/real-time_migration/task_management/step_4_compare_migration_items.html) in *Data Replication Service User Guide*.

3. Cut over services.

    You are advised to start the cutover process during off-peak hours. Perform at least one complete data comparison during off-peak hours. To obtain accurate comparison results, start data comparison at a specified time point during off-peak hours. If needed, select *Start at a specified time* for *Comparison Time*. Due to slight time differences and continuous operations on data, inconsistent comparison results may be generated, reducing the reliability and validity of the results.

    1. Interrupt services first. If the workload is not heavy, you may not need to interrupt the services.

    2. Run the following statement on the source database and check whether any new sessions execute SQL statements within the next 1 to 5 minutes. If there are no new statements executed, the service has been stopped.

       ```bash
       db.currentOp()
       ```

       :::note
       The process list queried by the preceding statement includes the connection of the DRS replication instance. If no additional session executes SQL statements, the service has been stopped.
       :::

    3. On the *Migration Progress* page, view the synchronization delay. When the delay is displayed as 0s and remains stable for a period, you can perform a data-level comparison between the source and destination databases. For details about the time required, refer to the results of the previous comparison.

       - If there is enough time, compare all objects.
       - If there is not enough time, use the data-level comparison to compare the tables that are frequently used and that contain key business data or inconsistent data.

    4. Determine an appropriate time to cut the services over to the destination database. After services are restored and available, the migration is complete.

4. Stop or delete the migration task.

    1. Stop the migration task.
After databases and services are migrated to the destination database, to prevent operations on the source database from being synchronized to the destination database to overwrite data, you can stop the migration task. This operation only deletes the replication instance, and the migration task is still displayed in the task list. You can view or delete the task. DRS will not charge for this task after you stop it.
    2. Delete the migration task. After the migration task is complete, you can delete it. After the migration task is deleted, it will no longer be displayed in the task list.

diff --git a/docs/best-practices/databases/document-database-service/from-on-premises-mongodb-to-dds.md b/docs/best-practices/databases/document-database-service/from-on-premises-mongodb-to-dds.md
index 639dfb1e1..ec1a997ae 100644
--- a/docs/best-practices/databases/document-database-service/from-on-premises-mongodb-to-dds.md
+++ b/docs/best-practices/databases/document-database-service/from-on-premises-mongodb-to-dds.md
@@ -1,5 +1,366 @@
 ---
 id: from-on-premises-mongodb-to-dds
-title: From Other Cloud MongoDB to DDS
+title: From On-Premises MongoDB to DDS
 tags: [dds, migration, mongodb]
 ---

# From On-Premises MongoDB to DDS

DRS helps you migrate data from on-premises MongoDB databases to DDS on the current cloud. With DRS, you can migrate databases online with zero downtime and your services and databases can remain operational during migration.

## Solution Design

This section describes how to use DRS to migrate an on-premises MongoDB database to DDS on the current cloud. The following network types are supported:

### VPN

![**Figure 1** VPN](/img/docs/best-practices/databases/document-database-service/en-us_image_0295762692.png)

### Public network

![**Figure 2** Public network+SSL connection](/img/docs/best-practices/databases/document-database-service/en-us_image_0234000688.png)

### Procedure

![**Figure 3** Flowchart](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001213229532.png)

:::important

- Database migration is closely impacted by a wide range of environmental and operational factors. To ensure the migration goes smoothly, perform a test run before the actual migration to help you detect and resolve any potential issues in advance. Recommendations on how to minimize any potential impacts on your database are provided in this section.
- It is strongly recommended that you start your migration task during off-peak hours. A less active database is easier to migrate successfully. If the data is fairly static, there is less likely to be any severe performance impacts during the migration.

:::

:::caution
Before creating a migration task, read the migration notes carefully.
:::

For details, see [precautions](https://docs.otc.t-systems.com/data-replication-service/umn/real-time_migration/to_the_cloud/index.html) on using specific migration tasks in *Data Replication Service Real-Time Migration*.

## Prerequisites

1.
Permissions + + | Database | Full Migration Permission | Full+Incremental Migration Permission | + | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | + | Source | **Replica set**: The source database user must have the read permission for the database to be migrated.
**Single node**: The source database user must have the read permission for the database to be migrated.
**Cluster**: The source database user must have the read permission for the databases to be migrated and the config database. To migrate accounts and roles of the source database, the source database user must have the read permission for the `system.users` and `system.roles` system tables of the admin database. | **Replica set**: The source database user must have the read permission for the databases to be migrated and the local database.
**Single node**: The source database user must have the `read` permission for the databases to be migrated and the local database.
**Cluster**: The source mongos node user must have the `readAnyDatabase` permission for the databases to be migrated and the config database. The source shard node user must have the `readAnyDatabase` permission for the **admin** database and the `read` permission for the **local** database. To migrate accounts and roles of the source database, the source database user must have the read permission for the `system.users` and `system.roles` system tables of the **admin** database. |
    | Destination | The destination database user must have the `dbAdminAnyDatabase` permission for the **admin** database and the `readWrite` permission for the **destination** database. If the destination database is a cluster instance, the migration account must have the read permission for the config database. | |

    **Table 1** Migration permissions

    - Source database permissions:

      The source database user must have all the required permissions listed in the table above. If the permissions are insufficient, create a user that has all of the permissions on the source database.

    - Destination database permissions:

      If the destination database is a DDS database, the initial account can be used.

2. Network settings

    - Source database network settings:

      You can migrate on-premises MongoDB databases to DDS through a VPN or public network. Enable public accessibility or establish a VPN for local MongoDB databases based on the site requirements. You are advised to migrate data through a public network, which is more convenient and cost-effective.

    - Destination database network settings:

      - If the source database accesses the destination database through a VPN, enable the VPN service first so that the source database can communicate with the destination DDS network.
      - If you access the DDS DB instance through a public network, no network settings are required.

3. Security rules

    a. Source database network settings:

      - The replication instance needs to be able to access the source DB. That means that the EIP of the replication instance must be on the whitelist of the source MongoDB instance. Before configuring the network whitelist for the source database, you need to obtain the EIP of the DRS replication instance.

        After creating a replication instance on the DRS console, you can find the EIP on the *Configure Source and Destination Databases* page as shown below:

        ![**Figure 4** EIP of the replication instance](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001244078029.png)

        You can also add `0.0.0.0/0` to the source database whitelist to allow any IP address to access the source database, but this action may result in security risks.

      - If the migration is performed over a VPN network, add the private IP address of the DRS replication instance to the whitelist of the source database to enable the source database to communicate with the destination database.

        If you do take this step, delete this item from the whitelist once the migration is complete, or your system will be insecure.

    b. Destination database security group settings:

      By default, the destination database and the DRS replication instance are in the same VPC and can communicate with each other. No further configuration is required.
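    The shell work implied by these prerequisites can be sketched as follows: list the existing users and roles (you will need them again for item 4 below, because user information is not migrated automatically), then create a migration account with the read permissions from Table 1. This is a minimal sketch: the account name `drs_migrator` and the database name `mydb` are placeholders, and the roles shown match the replica-set full+incremental case, so adjust them for your instance type:

    ```bash
    mongosh "mongodb://<admin-user>:<password>@<source-host>:<port>/admin" --eval '
      // List current users and user-defined roles so they can be recreated
      // manually on the destination DDS instance later.
      printjson(db.getSiblingDB("admin").getUsers());
      printjson(db.getSiblingDB("admin").getRoles());

      // Create a dedicated migration account: read on the database to be
      // migrated and on the local database (replica set, full+incremental).
      db.getSiblingDB("admin").createUser({
        user: "drs_migrator",
        pwd: "<strong-password>",
        roles: [
          { role: "read", db: "mydb" },
          { role: "read", db: "local" }
        ]
      });
    '
    ```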
4. Other

    You need to export the user information of the MongoDB database first and manually add it to the destination DDS DB instance because the user information will not be migrated.

## Migration Procedure

The following describes how to use DRS to migrate an on-premises MongoDB database to a DDS DB instance.

1. Create a migration task.

    a. Log in to the management console and choose *Databases* -> *Data Replication Service* to go to the DRS console.

    b. On the *Online Migration Management* page, click *Create Migration Task*.

    c. On the *Create Replication Instance* page, configure the task details, description, and replication instance, and click *Next*.

       ![**Figure 5** Replication instance information](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001493711038.png)

       | Parameter | Description |
       | ----------- | ----------- |
       | Region | The region where the replication instance is deployed. You can change the region. To reduce latency and improve access speed, select the region closest to your workloads. |
       | Project | The project corresponds to the current region and can be changed. |
       | Task Name | The task name consists of 4 to 50 characters, starts with a letter, and can contain only letters (case-insensitive), digits, hyphens (-), and underscores (\_). |
       | Description | The description consists of a maximum of 256 characters and cannot contain the following special characters: =\<\>&'\\" |

       **Table 2** Task settings

       | Parameter | Description |
       | --------- | ----------- |
       | Data Flow | To the cloud |
       | Source DB Engine | Select MongoDB. |
       | Destination DB Engine | Select DDS. |
       | Network Type | Select Public network. |
       | Destination DB Instance | The DDS DB instance you purchased. |
       | Replication Instance Subnet | The subnet where the replication instance resides. You can also click View Subnet to go to the network console to view the subnet where the instance resides. By default, the DRS instance and the destination DB instance are in the same subnet. You need to select the subnet where the DRS instance resides and ensure that the subnet has available IP addresses. To ensure that the replication instance is successfully created, only subnets with DHCP enabled are displayed. |
       | Migration Type | Full This migration type is suitable for scenarios where service interruption is acceptable.
All objects in non-system databases are migrated to the destination database at one time. The objects include collections and indexes. Full+Incremental The full+incremental migration type allows you to migrate data without interrupting services. After a full migration initializes the destination database, an incremental migration parses logs to ensure data consistency between the source and destination databases. | + | Source DB Instance Type | If you select Full+Incremental for Migration Type, set this parameter based on the source database. If the source database is a cluster instance, set this parameter to Cluster. If the source database is a replica set or a single node instance, set this parameter to Non-cluster. | + | Obtain Incremental Data | This parameter is available for configuration if Source DB Instance Type is set to Cluster. You can determine how to capture data changes during the incremental synchronization. oplog: For MongoDB 3.2 or later, DRS directly connects to each shard of the source DB instance to extract data. If you select this mode, you must disable the balancer of the source instance. When testing the connection, you need to enter the connection information of each shard node of the source instance. changeStream: This method is recommended. For MongoDB 4.0 and later, DRS connects to mongos nodes of the source instance to extract data. If you select this method, you must enable the WiredTiger storage engine of the source instance. Note Only whitelisted users can use changeStream. To use this function, submit a service ticket. In the upper right corner of the management console, choose Service Tickets > Create Service Ticket to submit a service ticket. | + | Source Shard Quantity | If Source DB Instance Type is set to Cluster and Obtain Incremental Data is set to oplog, enter the number of source shard nodes. The default minimum number of source DB instances is 2 and the maximum number is 32. You can set this parameter based on the number of source database shards. | + + **Table 3** Replication instance settings + + d. On the *Configure Source and Destination Databases* page, wait + until the replication instance is created. Then, specify source + and destination database information and click *Test + Connection* for both the source and destination databases to + check whether they have been connected to the replication + instance. After the connection tests are successful, select the + check box before the agreement and click *Next*. + + ![**Figure 6** Source database + information](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001151977634.png) + + | Parameter | Description | + | ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | + | mongos Address | IP address or domain name of the source database in the IP address/Domain name:Port format. The port of the source database. 
Range: 1 - 65534 You can enter a maximum of three groups of IP addresses or domain names of the source database. Separate multiple values with commas (,). For example: 192.168.0.1:8080,192.168.0.2:8080. Ensure that the entered IP addresses or domain names belong to the same sharded cluster. Note If multiple IP addresses or domain names are entered, the test connection is successful as long as one IP address or domain name is accessible. Therefore, you must ensure that the IP address or domain name is correct. | + | Authentication Database | The name of the authentication database. For example: The default authentication database of Open Telekom Cloud DDS instance is admin. | + | mongos Username | A username for the source database. | + | mongos Password | The password for the source database username. | + | SSL Connection | SSL encrypts the connections between the source and destination databases. If SSL is enabled, upload the SSL CA root certificate. | + | Sharded Database | Enter the information about the sharded databases in the source database. | + + **Table 4** Source database settings + + | Parameter | Description | + | ----------------- | ------------------------------------------------------------------------------------ | + | DB Instance Name | The DB instance you selected when creating the migration task and cannot be changed. | + | Database Username | The username for accessing the destination database. | + | Database Password | The password for the database username. | + + **Table 5** Destination database settings + + - Destination database configuration + + ![**Figure 7** Destination database information](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001198097269.png) + + e. On the *Set Task* page, select migration objects and click + *Next*. + + ![**Figure 8** Migration + object](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001198097583.png) + + | Parameter | Description | + | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | + | Migrate Account | There are accounts that can be migrated completely and accounts that cannot be migrated. You can choose whether to migrate the accounts. Accounts that cannot be migrated or accounts that are not selected will not exist in the destination database. Ensure that your services will not be affected by these accounts. 
Yes If you choose to migrate accounts, see [Migrating Accounts](https://docs.otc.t-systems.com/data-replication-service/umn/real-time_migration/task_management/managing_objects/migrating_accounts.html) in Data Replication Service User Guide to migrate database users and roles. No During the migration, accounts and roles are not migrated. | + | Migrate Object | You can choose to migrate all objects, tables, or databases based on your service requirements. All: All objects in the source database are migrated to the destination database. After the migration, the object names will remain the same as those in the source database and cannot be modified. Tables: The selected table-level objects will be migrated. Databases: The selected database-level objects will be migrated. If the source database is changed, click in the upper right corner before selecting migration objects to ensure that the objects to be selected are from the changed source database. Note If you choose not to migrate all of the databases, the migration may fail because the objects, such as stored procedures and views, in the database to be migrated may have dependencies on other objects that are not migrated. To ensure a successful migration, you are advised to migrate all of the databases. When you select an object, the spaces before and after the object name are not displayed. If there are two or more consecutive spaces in the middle of the object name, only one space is displayed. The search function can help you quickly select the required database objects. | + + **Table 6** Migration object + + f. On the *Check Task* page, check the migration task. + + - If any check fails, review the cause and rectify the fault. + After the fault is rectified, click *Check Again*. + + For details about how to handle check failures, see + [Checking Whether the Source Database Is + Connected](https://docs.otc.t-systems.com/data-replication-service/umn/troubleshooting/solutions_to_failed_check_items/networks/checking_whether_the_source_database_is_connected.html) + in *Data Replication Service User Guide*. + + - If all check items are successful, click *Next*. + + ![**Figure 9** Task + Check](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001152137438.png) + + :::note + You can proceed to the next step only when all check items are + successful. If any alarms are generated, view and confirm the + alarm details first before proceeding to the next step. + ::: + + g. On the displayed page, specify *Start Time* and confirm that + the configured information is correct and click *Submit* to + submit the task. + + ![**Figure 10** Task startup + settings](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001199158158.png) + + | Parameter | Description | + | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | Start Time | Set Start Time to Start upon task creation or Start at a specified time based on site requirements. The Start at a specified time option is recommended. Note The migration task may affect the performance of the source and destination databases. You are advised to start the task in off-peak hours and reserve two to three days for data verification. | + + h. 
After the task is submitted, go back to the *Online Migration Management* page to view the task status.

2. Manage the migration task.

    The migration task contains two phases: full migration and incremental migration. You can manage them in different phases.

    - Full migration
        - Viewing the migration progress: Click the target full migration task, and on the *Migration Progress* tab, you can see the migration progress of the structure, data, indexes, and migration objects. When the progress reaches 100%, the migration is complete.
        - Viewing migration details: In the migration details, you can view the migration progress of a specific object. If the number of objects is the same as that of migrated objects, the migration is complete. You can view the migration progress of each object in detail. Currently, this function is available only to whitelisted users. You can submit a service ticket to apply for this function.
    - Incremental Migration
        - Viewing the synchronization delay: After the full migration is complete, an incremental migration starts. On the *Online Migration Management* page, click the target migration task. On the displayed page, click *Migration Progress* to view the synchronization delay of the incremental migration. If the synchronization delay is 0s, the destination database is being synchronized with the source database in real time. You can also view the data consistency on the *Migration Comparison* tab.

          ![**Figure 11** Viewing the synchronization delay](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001243756137.png)

        - Viewing the migration results: On the *Online Migration Management* page, click the target migration task. On the displayed page, click *Migration Comparison* and perform a migration comparison in accordance with the comparison process, which should help you determine an appropriate time for migration to minimize service downtime.

          ![**Figure 12** Database comparison process](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001213070166.png)

          For details, see [Comparing Migration Items](https://docs.otc.t-systems.com/data-replication-service/umn/real-time_migration/task_management/step_4_compare_migration_items.html) in *Data Replication Service User Guide*.

3. Cut over services.

    You are advised to start the cutover process during off-peak hours. Perform at least one complete data comparison during off-peak hours. To obtain accurate comparison results, start data comparison at a specified time point during off-peak hours. If needed, select *Start at a specified time* for *Comparison Time*. Due to slight time differences and continuous operations on data, inconsistent comparison results may be generated, reducing the reliability and validity of the results.

    a. Interrupt services first. If the workload is not heavy, you may not need to interrupt the services.

    b. Run the following statement on the source database and check whether any new sessions execute SQL statements within the next 1 to 5 minutes. If there are no new statements executed, the service has been stopped.

       ```bash
       db.currentOp()
       ```

       :::note
       The process list queried by the preceding statement includes the connection of the DRS replication instance. If no additional session executes SQL statements, the service has been stopped.
       :::
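       To cut down the noise from idle connections during this check, `db.currentOp()` also accepts a filter document. A sketch using standard MongoDB shell syntax, with placeholder connection details:

       ```bash
       mongosh "mongodb://<user>:<password>@<source-host>:<port>/admin" --eval '
         // Show only active operations so that sessions other than the
         // DRS replication instance are easier to spot.
         printjson(db.currentOp({ "active": true, "op": { "$ne": "none" } }))
       '
       ```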
    c. On the *Migration Progress* page, view the synchronization delay. When the delay is displayed as 0s and remains stable for a period, you can perform a data-level comparison between the source and destination databases. For details about the time required, refer to the results of the previous comparison.

       - If there is enough time, compare all objects.
       - If there is not enough time, use the data-level comparison to compare the tables that are frequently used and that contain key business data or inconsistent data.

    d. Determine an appropriate time to cut the services over to the destination database. After services are restored and available, the migration is complete.

4. Stop or delete the migration task.

    a. Stop the migration task. After databases and services are migrated to the destination database, to prevent operations on the source database from being synchronized to the destination database to overwrite data, you can stop the migration task. This operation only deletes the replication instance, and the migration task is still displayed in the task list. You can view or delete the task. DRS will not charge for this task after you stop it.
    b. Delete the migration task. After the migration task is complete, you can delete it. After the migration task is deleted, it will no longer be displayed in the task list.
\ No newline at end of file
diff --git a/docs/best-practices/databases/document-database-service/from-other-cloud-mongodb-to-dds.md b/docs/best-practices/databases/document-database-service/from-other-cloud-mongodb-to-dds.md
index 27eb89dca..dc40b86c1 100644
--- a/docs/best-practices/databases/document-database-service/from-other-cloud-mongodb-to-dds.md
+++ b/docs/best-practices/databases/document-database-service/from-other-cloud-mongodb-to-dds.md
@@ -3,3 +3,361 @@
id: from-other-cloud-mongodb-to-dds
title: From Other Cloud MongoDB to DDS
tags: [dds, migration, mongodb]
---

# From Other Cloud MongoDB to DDS

DRS helps you migrate MongoDB databases from other cloud platforms to DDS on the current cloud. With DRS, you can migrate databases online with zero downtime and your services and databases can remain operational during migration.

## Solution Design

This section describes how to use DRS to migrate MongoDB databases from another cloud to DDS on the current cloud. Migration scenarios include:

### Migrating MongoDB databases from another cloud to DDS on the current cloud

![**Figure 1** Migrating MongoDB databases from other clouds](/img/docs/best-practices/databases/document-database-service/en-us_image_0295762499.png)

### Migrating self-built MongoDB databases from servers on another cloud to DDS on the current cloud

![**Figure 2** Migrating MongoDB databases from other cloud servers](/img/docs/best-practices/databases/document-database-service/en-us_image_0295762649.png)

### Procedure

![**Figure 3** Flowchart](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001213229532.png)

:::important

- Database migration is closely impacted by a wide range of environmental and operational factors. To ensure the migration goes smoothly, perform a test run before the actual migration to help you detect and resolve any potential issues in advance. Recommendations on how to minimize any potential impacts on your database are provided in this section.
- It is strongly recommended that you start your migration task during off-peak hours.
A less active database is easier to migrate successfully. If the data is fairly static, there is less likely to be any severe performance impacts during the migration.

:::

:::caution
Before creating a migration task, read the migration notes carefully.
:::

For details, see [precautions](https://docs.otc.t-systems.com/data-replication-service/umn/real-time_migration/to_the_cloud/index.html#drs-online-migration) on using specific migration tasks in *Data Replication Service Real-Time Migration*.

## Prerequisites

1. Permissions

    Table 1 below lists the permissions required for the source and destination databases when migrating a MongoDB database from another cloud to DDS on the current cloud.

    | Database | Full Migration Permission | Full+Incremental Migration Permission |
    | -------- | ------------------------- | ------------------------------------- |
    | Source | **Replica set**: The source database user must have the read permission for the database to be migrated.
**Single node**: The source database user must have the read permission for the database to be migrated.
**Cluster**: The source database user must have the read permission for the databases to be migrated and the config database. To migrate accounts and roles of the source database, the source database user must have the read permission for the `system.users` and `system.roles` system tables of the admin database. | **Replica set**: The source database user must have the read permission for the databases to be migrated and the local database.
**Single node**: The source database user must have the `read` permission for the databases to be migrated and the local database.
**Cluster**: The source mongos node user must have the `readAnyDatabase` permission for the databases to be migrated and the config database. The source shard node user must have the `readAnyDatabase` permission for the **admin** database and the `read` permission for the **local** database. To migrate accounts and roles of the source database, the source database user must have the read permission for the `system.users` and `system.roles` system tables of the **admin** database. |
    | Destination | The destination database user must have the `dbAdminAnyDatabase` permission for the **admin** database and the `readWrite` permission for the **destination** database. If the destination database is a cluster instance, the migration account must have the read permission for the config database. | |

    **Table 1** Migration permissions

    - Source database permissions:

      The source MongoDB database user must have all the required permissions listed in Table 1. If the permissions are insufficient, create a user that has all of the permissions on the source database.

    - Destination database permissions:

      If the destination database is a DDS database, the initial account can be used.

2. Network settings

    Enable public accessibility for the source database.

    - Source database network settings:

      Any source database MongoDB instances will need to be accessible from the Internet.

    - Destination database network settings: No settings are required.

3. Security rules

    - Source database security group settings:

      The replication instance needs to be able to access the source MongoDB instance. That means that the EIP of the replication instance must be on the whitelist of the source MongoDB instance.

      Before configuring the network whitelist, you need to obtain the EIP of the replication instance.

      - After creating a replication instance on the DRS console, you can find the EIP on the **Configure Source and Destination Databases** page as shown in Figure 4.

        ![**Figure 4** EIP of the replication instance](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001244078029.png)

        You can also add `0.0.0.0/0` to the source database whitelist to allow any IP address to access the source database, but this action may result in security risks.

        If you do take this step, delete this item from the whitelist once the migration is complete, or your system will be insecure.

    - Destination database security group settings:

      By default, the destination database and the DRS replication instance are in the same VPC and can communicate with each other. No further configuration is required.

4. Other

    You need to export the user information of the MongoDB database first and manually add it to the destination DDS DB instance because the user information will not be migrated.

## Migration Procedure

1. Create a migration task.

    1. Log in to the management console and choose *Databases* -> *Data Replication Service* to go to the DRS console.

    2. On the *Online Migration Management* page, click *Create Migration Task*.

    3. On the *Replication Instance Information* page, configure the task details, description, and replication instance details, and click *Next*.
+ + ![**Figure 5** Replication instance + information](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001493711038.png) + + | Parameter | Description | + | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | Region | The region where the replication instance is deployed. You can change the region. To reduce latency and improve access speed, select the region closest to your workloads. | + | Project | The project corresponds to the current region and can be changed. | + | Task Name | The task name consists of 4 to 50 characters, starts with a letter, and can contain only letters (case-insensitive), digits, hyphens (-), and underscores (\_). | + | Description | The description consists of a maximum of 256 characters and cannot contain the following special characters: =\<\>&'\\" | + + **Table 2** Task settings + + | Parameter | Description | + | --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | Data Flow | To the cloud | + | Source DB Engine | Select MongoDB. | + | Destination DB Engine | Select DDS. | + | Network Type | Select Public network. | + | Destination DB Instance | The DDS DB instance you purchased. | + | Replication Instance Subnet | The subnet where the replication instance resides. You can also click View Subnet to go to the network console to view the subnet where the instance resides. By default, the DRS instance and the destination DB instance are in the same subnet. You need to select the subnet where the DRS instance resides, and there are available IP addresses for the subnet. To ensure that the replication instance is successfully created, only subnets with DHCP enabled are displayed. | + | Migration Type | Full This migration type is suitable for scenarios where service interruption is acceptable. All objects in non-system databases are migrated to the destination database at one time. The objects include collections and indexes. Full+Incremental The full+incremental migration type allows you to migrate data without interrupting services. After a full migration initializes the destination database, an incremental migration parses logs to ensure data consistency between the source and destination databases. | + | Source DB Instance Type | If you select Full+Incremental for Migration Type, set this parameter based on the source database. If the source database is a cluster instance, set this parameter to Cluster. 
If the source database is a replica set or a single node instance, set this parameter to Non-cluster. | + | Obtain Incremental Data | This parameter is available for configuration if Source DB Instance Type is set to Cluster. You can determine how to capture data changes during the incremental synchronization. oplog: For MongoDB 3.2 or later, DRS directly connects to each shard of the source DB instance to extract data. If you select this mode, you must disable the balancer of the source instance. When testing the connection, you need to enter the connection information of each shard node of the source instance. changeStream: This method is recommended. For MongoDB 4.0 and later, DRS connects to mongos nodes of the source instance to extract data. If you select this method, you must enable the WiredTiger storage engine of the source instance. Note Only whitelisted users can use changeStream. To use this function, submit a service ticket. In the upper right corner of the management console, choose Service Tickets > Create Service Ticket to submit a service ticket. | + | Source Shard Quantity | If Source DB Instance Type is set to Cluster and Obtain Incremental Data is set to oplog, enter the number of source shard nodes. The default minimum number of source DB instances is 2 and the maximum number is 32. You can set this parameter based on the number of source database shards. | + + **Table 3** Replication instance settings + + 4. On the *Configure Source and Destination Databases* page, wait + until the replication instance is created. Then, specify source + and destination database information and click *Test + Connection* for both the source and destination databases to + check whether they have been connected to the replication + instance. After the connection tests are successful, select the + check box before the agreement and click *Next*. + + ![**Figure 6** Source database + information](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001151977634.png) + + - Destination database configuration + | Parameter | Description | + | ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | + | mongos Address | IP address or domain name of the source database in the IP address/Domain name:Port format. The port of the source database. Range: 1 - 65534 You can enter a maximum of three groups of IP addresses or domain names of the source database. Separate multiple values with commas (,). For example: 192.168.0.1:8080,192.168.0.2:8080. Ensure that the entered IP addresses or domain names belong to the same sharded cluster. Note If multiple IP addresses or domain names are entered, the test connection is successful as long as one IP address or domain name is accessible. Therefore, you must ensure that the IP address or domain name is correct. | + | Authentication Database | The name of the authentication database. 
For example: The default authentication database of Open Telekom Cloud DDS instance is admin. | + | mongos Username | A username for the source database. | + | mongos Password | The password for the source database username. | + | SSL Connection | SSL encrypts the connections between the source and destination databases. If SSL is enabled, upload the SSL CA root certificate. | + | Sharded Database | Enter the information about the sharded databases in the source database. | + + **Table 4** Source database settings + + | Parameter | Description | + | ----------------- | ------------------------------------------------------------------------------------ | + | DB Instance Name | The DB instance you selected when creating the migration task and cannot be changed. | + | Database Username | The username for accessing the destination database. | + | Database Password | The password for the database username. | + + **Table 5** Destination database settings + + ![**Figure 7** Destination database + information](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001198097269.png) + + 5. On the *Set Task* page, select migration objects and click + *Next*. + + ![**Figure 8** Migration + object](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001198097583.png) + + | Parameter | Description | + | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | + | Migrate Account | There are accounts that can be migrated completely and accounts that cannot be migrated. You can choose whether to migrate the accounts. Accounts that cannot be migrated or accounts that are not selected will not exist in the destination database. Ensure that your services will not be affected by these accounts. Yes If you choose to migrate accounts, see [Migrating Accounts](https://docs.otc.t-systems.com/data-replication-service/umn/real-time_migration/task_management/managing_objects/migrating_accounts.html) in Data Replication Service User Guide to migrate database users and roles. No During the migration, accounts and roles are not migrated. | + | Migrate Object | You can choose to migrate all objects, tables, or databases based on your service requirements. All: All objects in the source database are migrated to the destination database. 
After the migration, the object names will remain the same as those in the source database and cannot be modified. Tables: The selected table-level objects will be migrated. Databases: The selected database-level objects will be migrated. If the source database is changed, click in the upper right corner before selecting migration objects to ensure that the objects to be selected are from the changed source database. Note If you choose not to migrate all of the databases, the migration may fail because the objects, such as stored procedures and views, in the database to be migrated may have dependencies on other objects that are not migrated. To ensure a successful migration, you are advised to migrate all of the databases. When you select an object, the spaces before and after the object name are not displayed. If there are two or more consecutive spaces in the middle of the object name, only one space is displayed. The search function can help you quickly select the required database objects. | + + **Table 6** Migration object + + 6. On the *Check Task* page, check the migration task. + + - If any check fails, review the cause and rectify the fault. + After the fault is rectified, click *Check Again*. + + For details about how to handle check failures, see + [Checking Whether the Source Database Is + Connected](https://docs.otc.t-systems.com/data-replication-service/umn/troubleshooting/solutions_to_failed_check_items/networks/checking_whether_the_source_database_is_connected.html) + in *Data Replication Service User Guide*. + + - If all check items are successful, click *Next*. + + ![**Figure 9** Task + Check](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001152137438.png) + + :::note + You can proceed to the next step only when all check items are + successful. If any alarms are generated, view and confirm the + alarm details first before proceeding to the next step. + ::: + + 7. On the displayed page, specify *Start Time* and confirm that + the configured information is correct and click *Submit* to + submit the task. + + ![**Figure 10** Task startup + settings](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001199158158.png) + + | Parameter | Description | + | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | Start Time | Set Start Time to Start upon task creation or Start at a specified time based on site requirements. The Start at a specified time option is recommended. Note The migration task may affect the performance of the source and destination databases. You are advised to start the task in off-peak hours and reserve two to three days for data verification. | + + **Table 7** Task startup settings + + 8. After the task is submitted, go back to the *Online Migration + Management* page to view the task status. + +1. Manage the migration task. + + The migration task contains two phases: full migration and + incremental migration. You can manage them in different phases. + + - Full migration + - Viewing the migration progress: Click the target full + migration task, and on the *Migration Progress* tab, you + can see the migration progress of the structure, data, + indexes, and migration objects. 
When the progress reaches
       100%, the migration is complete.
     - Viewing migration details: In the migration details, you can
       view the migration progress of a specific object. If the
       number of objects is the same as that of migrated objects,
       the migration is complete. You can view the migration
       progress of each object in detail. Currently, this function
       is available only to whitelisted users. You can submit a
       service ticket to apply for this function.
   - Incremental migration
     - Viewing the synchronization delay: After the full migration
       is complete, an incremental migration starts. On the
       *Online Migration Management* page, click the target
       migration task. On the displayed page, click *Migration
       Progress* to view the synchronization delay of the
       incremental migration. If the synchronization delay is 0s,
       the destination database is being synchronized with the
       source database in real time. You can also view the data
       consistency on the *Migration Comparison* tab.

       ![**Figure 11** Viewing the synchronization
       delay](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001243756137.png)

     - Viewing the migration results: On the *Online Migration
       Management* page, click the target migration task. On the
       displayed page, click *Migration Comparison* and perform a
       migration comparison in accordance with the comparison
       process. This helps you determine an appropriate time for
       the cutover and minimize service downtime.

       ![**Figure 12** Database comparison
       process](/img/docs/best-practices/databases/document-database-service/en-us_image_0000001213070166.png)

       For details, see [Comparing Migration
       Items](https://docs.otc.t-systems.com/data-replication-service/umn/real-time_migration/task_management/step_4_compare_migration_items.html)
       in the *Data Replication Service User Guide*.

2. Cut over services.

   You are advised to start the cutover during off-peak hours and to
   perform at least one complete data comparison in that window. To
   obtain accurate comparison results, start the data comparison at a
   specified time point during off-peak hours; if needed, select
   *Start at a specified time* for *Comparison Time*. Because of
   slight time differences and ongoing operations on the data, the
   comparison results may otherwise be inconsistent, reducing their
   reliability and validity.

   1. Interrupt services first. If the workload is not heavy, you may
      not need to interrupt the services.

   2. Run the following statement on the source database and check
      whether any new sessions execute statements within the next
      1 to 5 minutes. If no new statements are executed, the
      service has been stopped.

      ```bash
      db.currentOp()
      ```

      :::note
      The process list queried by the preceding statement includes the
      connection of the DRS replication instance. If no additional
      session executes statements, the service has been stopped.
      :::

   3. On the *Migration Progress* page, view the synchronization
      delay. When the delay is displayed as 0s and remains stable for
      a period, you can perform a data-level comparison between
      the source and destination databases. For details about the time
      required, refer to the results of the previous comparison.

      - If there is enough time, compare all objects.
      - If there is not enough time, use the data-level comparison
        to compare the tables that are frequently used and that
        contain key business data or inconsistent data.

   4. Determine an appropriate time to cut the services over to the
      destination database. After services are restored and available,
      the migration is complete.

3. Stop or delete the migration task.

   1. Stop the migration task. After databases and services are
      migrated to the destination database, you can stop the migration
      task to prevent operations on the source database from being
      synchronized to the destination database and overwriting data
      there. This operation only deletes the replication instance; the
      migration task is still displayed in the task list, where you can
      view or delete it. DRS will not charge for this task after you
      stop it.
   2. Delete the migration task. After the migration task is complete,
      you can delete it. After the migration task is deleted, it will
      no longer be displayed in the task list.
diff --git a/docs/best-practices/networking/domain-name-service/configuring-private-domain-names-for-ecss.md b/docs/best-practices/networking/domain-name-service/configuring-private-domain-names-for-ecss.md
new file mode 100644
index 000000000..b0b3292dd
--- /dev/null
+++ b/docs/best-practices/networking/domain-name-service/configuring-private-domain-names-for-ecss.md
@@ -0,0 +1,194 @@
---
id: configuring-private-domain-names-for-ecss
title: Configuring Private Domain Names for ECSs
tags: [dns, ecs]
---

# Configuring Private Domain Names for ECSs

If one of your ECSs stops working normally and you need the backup
ECS to handle requests, but you have not configured private zones
for the two ECSs, you need to change the private IP address of the
faulty ECS in the code. This interrupts your services and forces you
to publish your website again, which is time-consuming and
labor-intensive.

Assume that you have configured private zones for the ECSs and have
included their private domain names in the code. If one ECS
malfunctions, you only need to change the DNS records to direct
traffic to a healthy ECS. Your services will not be interrupted, and
you do not need to publish the website again.

## Solution Design

The figure below shows the networking of
a website where ECSs and RDS instances are deployed in a VPC.

- **ECS0**: primary service node
- **ECS1**: public service node -> **ECS2**: backup service node
- **RDS1**: service database -> **RDS2**: backup database

![**Figure 1** Networking example](/img/docs/best-practices/networking/domain-name-service/en-us_image_0000001394829705.png)

:::note

- **Higher efficiency and security**: You can use private domain names
  to access ECSs in the VPCs, without going through the Internet.
- **Easier management**: Compared with IP addresses, domain names are
  easier to modify in the code. When an ECS is changed, you only need
  to change the DNS records without modifying the code.

:::

## Prerequisites

This table lists the private zones and record sets planned for the cloud
servers.
| Resource | Private Zone | Associated VPC | Private IP Address | Record Set Type | Description |
| -------- | ------------ | -------------- | ------------------ | --------------- | ---------------------------------- |
| ECS1 | api.ecs.com | VPC_001 | 192.168.2.8 | A | Public service node |
| ECS2 | api.ecs.com | VPC_001 | 192.168.3.8 | A | Backup for the public service node |
| RDS1 | db.com | VPC_001 | 192.168.2.5 | A | Service database |
| RDS2 | db.com | VPC_001 | 192.168.3.5 | A | Backup database |

**Table 1** Private zones and record sets for each server

| Region | Service | Resource | Description | Quantity | Monthly Price |
| ------ | ------- | ------------------ | ----------- | -------- | ------------- |
| eu-de | VPC | VPC_001 | The DNS server addresses must be the same as the private DNS server addresses of Open Telekom Cloud. For details, see [Availability of secondary DNS](https://www.open-telekom-cloud.com/en/support/release-notes/secondary-dns). | 1 | Free |
| | ECS | ECS0<br/>ECS1<br/>ECS2 | Private domain name: `api.ecs.com`<br/>Associated VPC: `VPC_001`<br/>ECS1: public service node<br/>Private IP address: `192.168.2.8`<br/>ECS2: backup service node<br/>Private IP address: `192.168.3.8` | 3 | For details, see [ECS Product Pricing Details](https://open-telekom-cloud.com/en/prices/price-calculator). |
| | RDS | RDS1<br/>RDS2 | Private domain name: `db.com`<br/>Associated VPC: `VPC_001`<br/>RDS1: service database<br/>Private IP address: `192.168.2.5`<br/>RDS2: backup database<br/>Private IP address: `192.168.3.5` | 2 | For details, see [RDS Product Pricing Details](https://open-telekom-cloud.com/en/prices/price-calculator). |
| | DNS | api.ecs.com<br/>db.com | **api.ecs.com**: Associated VPC: `VPC_001`<br/>Record set type: `A`<br/>Value: `192.168.2.8`<br/>**db.com**: Associated VPC: `VPC_001`<br/>Record set type: `A`<br/>Value: `192.168.2.5` | 2 | Free |

**Table 2** Resource planning

## Configuring Private Zones

### Summary

The figure below shows the process for configuring private zones:

![**Figure 2** Process for configuring private
zones](/img/docs/best-practices/networking/domain-name-service/en-us_image_0173959206.png)

1. (Optional) Create a VPC and a subnet on the VPC console. This
   operation is required when you are configuring private domain names
   for servers during website deployment.
2. Create private zones, associate them with the VPC, and add a
   record set to each private zone on the DNS console.
3. (Optional) Change the DNS server addresses of the VPC subnet on the
   VPC console. This operation is required when you are configuring
   private domain names for servers where your website is already running.

### Procedure

1. (Optional) Create a VPC and a subnet.

   Before configuring private domain names for the ECSs and databases
   required by your website, you need to create a VPC and a subnet.

   a. Log in to the management console.

   b. Click ![image1](/img/docs/best-practices/networking/domain-name-service/en-us_image_0131021386.png) in
      the upper left corner and select the desired region and project.

   c. Choose *Network* -> *Virtual Private Cloud*.

   d. In the navigation pane on the left, choose *Virtual Private
      Cloud*.

   e. Click *Create VPC* and configure the parameters based on
      Table 3.

   f. Click *Create Now*.

2. Create private zones.

   Create private zones for the domain names used by ECS1 and RDS1.

   a. Choose *Network* -> *Domain Name Service*.

      The DNS console is displayed.

   b. In the navigation pane on the left, choose *Private Zones*.

   c. Click *Create Private Zone*.

   d. Configure the parameters based on the private zone plan in
      **Table 1**.

   e. Click *OK*. Then check the private zone created for `api.ecs.com`.

      You can view details about this private zone on the *Private
      Zones* page.

      :::note
      Click the domain name to view the SOA and NS record sets
      automatically generated for the zone.

      - The SOA record set identifies the base DNS information about
        the domain name.
      - The NS record set defines the authoritative DNS servers for the
        domain name.

      :::

   f. Repeat the preceding steps to create a private zone for `db.com`.

3. Add a record set to each private zone.

   Add record sets to translate the private domain names to the private
   IP addresses of ECS1 and RDS1.

   a. Click the domain name.

      The record set page is displayed.

   b. Click *Add Record Set*.

   c. Configure the parameters based on the record set plan in
      **Table 1**.

   d. Click *OK*. An A record set is added for `api.ecs.com`.

   e. Repeat the preceding steps to add an A record set for `db.com`.

      Set the record set value of `db.com` to `192.168.2.5`.

4. (Optional) Change the DNS server addresses of the VPC subnet.

   After you configure private domain names for the nodes of the website
   application, you need to change the DNS servers of the VPC subnet to
   those provided by the DNS service so that the domain names can be
   resolved.

   For details, see [How Do I Change Default DNS Servers of an ECS to
   Huawei Cloud Private DNS
   Servers?](https://support.huaweicloud.com/intl/en-us/dns_faq/dns_faq_005.html)

5. Switch to the backup ECS.

   When ECS1 becomes faulty, you can switch services to ECS2 by
   changing the value of the record set added to the private zone
   `api.ecs.com`.

   a. Log in to the management console.

   b. Click ![image1](/img/docs/best-practices/networking/domain-name-service/en-us_image_0131021386.png) in
      the upper left corner and select *eu-de*.

   c. Choose *Network* -> *Domain Name Service*.

      The DNS console is displayed.

   d. In the navigation pane on the left, choose *Private Zones*.

   e. In the private zone list, click the name of the zone
      `api.ecs.com`.

   f. Locate the A record set and click *Modify* under
      *Operation*.

   g. Change the value to `192.168.3.8`.

   h. Click *OK*.

   Traffic to ECS1 will then be directed to ECS2 by the private DNS server.
\ No newline at end of file
diff --git a/docs/best-practices/networking/elastic-load-balancing/routing-traffic-to-backend-servers-in-different-vpcs-from-the-load-balancer.md b/docs/best-practices/networking/elastic-load-balancing/routing-traffic-to-backend-servers-in-different-vpcs-from-the-load-balancer.md
new file mode 100644
index 000000000..c39c50e0b
--- /dev/null
+++ b/docs/best-practices/networking/elastic-load-balancing/routing-traffic-to-backend-servers-in-different-vpcs-from-the-load-balancer.md
@@ -0,0 +1,227 @@
---
id: routing-traffic-to-backend-servers-in-different-vpcs-from-the-load-balancer
title: Routing Traffic to Backend Servers in Different VPCs from the Load Balancer
tags: [vpc, elb, load-balancing]
---

# Routing Traffic to Backend Servers in Different VPCs from the Load Balancer {#elb_bp_0302}

You can use ELB to route traffic to backend servers in two VPCs
connected over a VPC peering connection.

## Solution Design

- A dedicated load balancer named `ELB-Test` is running in
  `VPC-Test-01` (`172.18.0.0/24`).
- An ECS named `ECS-Test` is running in `VPC-Test-02`
  (`172.17.0.0/24`).
- `IP as a Backend` is enabled for the dedicated load balancer
  `ELB-Test`, and `ECS-Test` in `VPC-Test-02` (`172.17.0.0/24`) is
  added to the backend server group associated with `ELB-Test`.

![*Figure 1*
Topology](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059065.png)

:::note Advantages
You can enable `IP as a Backend` for the dedicated load balancer to
route incoming traffic to servers in VPCs different from that of the
load balancer.
:::

### Prerequisites

| Resource Type | Resource Name | Description | Quantity |
| ---------------------- | ----------------- | ----------- | -------- |
| VPC | VPC-Test-01 | The VPC where ELB-Test is running: `172.18.0.0/24` | 1 |
| | VPC-Test-02 | The VPC where ECS-Test is running: `172.17.0.0/24` | 1 |
| VPC peering connection | Peering-Test | The connection between the VPC where ELB-Test is running and the VPC where ECS-Test is running. Local VPC: `172.18.0.0/24`; peer VPC: `172.17.0.0/24` | 1 |
| Route table | Route-VPC-Test-01 | The route table of VPC-Test-01. Destination: `172.17.0.0/24` | 1 |
| | Route-VPC-Test-02 | The route table of VPC-Test-02. Destination: `172.18.0.0/24` | 1 |
| ELB | ELB-Test | The dedicated load balancer | 1 |
| EIP | EIP-Test | The EIP bound to ELB-Test | 1 |
| ECS | ECS-Test | The ECS running in VPC-Test-02 | 1 |

**Table 1** Resource planning

:::note
To calculate the fees, you can visit the Open Telekom Cloud [Price
calculator](https://open-telekom-cloud.com/en/prices/price-calculator).
+::: + +### Procedure + +![*Figure 2* Process of associating servers in a VPC that is different +from the dedicated load +balancer](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674259057.png) + +## Creating VPCs + +1. Log in to the management console. + +2. Under *Networking*, select *Virtual Private Cloud*. On the + *Virtual Private Cloud* page displayed, click *Create VPC*. + +3. Configure the parameters as follows and click *Create Now*. For + details on how to create a VPC, see the [Virtual Private Cloud User + Guide](https://docs.otc.t-systems.com/virtual-private-cloud/umn/vpc_and_subnet/vpc/creating_a_vpc.html). + + - **Name**: `VPC-Test-01` + - **IPv4 CIDR Block**: `172.18.0.0/24` + - Configure other parameters as required. + + ![*Figure 3* Creating + *VPC-Test-01*](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459302.png) + +4. Repeat steps 2 & 3 to create the other VPC. + + - **Name**: `VPC-Test-02` + - **IPv4 CIDR Block**: `172.17.0.0/24` + - Configure other parameters as required. + + ![*Figure 4* Creating + *VPC-Test-02*](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674139185.png) + +## Creating a VPC Peering Connection + +1. In the navigation pane on the left, click *VPC Peering*. + +2. In the upper right corner, click *Create VPC Peering Connection*. + +3. Configure the parameters as follows and click *OK*. For details on + how to create a VPC peering connection, see the [Virtual Private + Cloud User + Guide](https://docs.otc.t-systems.com/virtual-private-cloud/umn/vpc_peering_connection/creating_a_vpc_peering_connection_with_another_vpc_in_your_account.html). + + - **Name**: `Peering-Test` + - **Local VPC**: `VPC-Test-01` + - **Peer VPC**: `VPC-Test-02` + - Configure other parameters as required. + + ![*Figure 5* Creating + *Peering-Test*](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625779198.png) + +## Adding Routes for the VPC Peering Connection + +1. In the navigation pane on the left, click *Route Tables*. + +2. In the upper right corner, click *Create Route Table*. + +3. Configure the parameters as follows and click *OK*. For details on + how to create a route table, see the [Virtual Private Cloud User + Guide](https://docs.otc.t-systems.com/virtual-private-cloud/umn/route_tables/creating_a_custom_route_table.html). + + - **Name**: `Route-VPC-Test-01` + - **VPC**: `VPC-Test-01` + - **Destination**: `172.17.0.0/24` + - **Next Hop Type**: `VPC peering connection` + - **Next Hop**: `Peering-Test` + + ![*Figure 6* Creating + *Route-VPC-Test-01*](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625299482.png) + +4. Repeat steps 2 & 3 to create the other route table. + + - **Name**: `Route-VPC-Test-02` + - **VPC**: `VPC-Test-02` + - **Destination**: `172.18.0.0/24` + - **Next Hop Type**: `VPC peering connection` + - **Next Hop**: `Peering-Test` + +## Creating an ECS + +1. Under *Computing*, click *Elastic Cloud Server*. + +2. In the upper right corner, click *Create ECS*. + +3. Select `VPC-Test-02` as the *VPC* and set *ECS Name* to + `ECS-Test`. Configure other parameters as required. For details, + see [Elastic Cloud Server User + Guide](https://docs.otc.t-systems.com/elastic-cloud-server/umn/getting_started/creating_an_ecs/overview.html). + + ![*Figure 7* Creating + ECS-Test](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619214.png) + +4. Deploy Nginx on the ECS. 
   ![*Figure 8* Deploying Nginx on
   *ECS-Test*](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001673939081.png)

## Creating a Dedicated Load Balancer and Adding an HTTP Listener and a Backend Server Group to the Load Balancer

1. Under *Networking*, click *Elastic Load Balance*.

2. In the upper right corner, click *Create Elastic Load Balancer*.

3. Configure the parameters as follows. For details, see the [Elastic Load
   Balance User
   Guide](https://docs.otc.t-systems.com/elastic-load-balancing/umn/load_balancer/creating_a_dedicated_load_balancer.html).

   - **Type**: `Dedicated`
   - **IP as a Backend**: `Enable`
   - **VPC**: `VPC-Test-01`
   - **Name**: `ELB-Test`
   - Configure other parameters as required.

   ![*Figure 9* Creating
   *ELB-Test*](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059069.png)

4. Add an HTTP listener and a backend server group to the dedicated
   load balancer. For details, see the [Elastic Load Balance User
   Guide](https://docs.otc.t-systems.com/elastic-load-balancing/umn/listener/adding_an_http_listener.html).

   ![*Figure 10* HTTP listener and backend server
   group](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674259061.png)

## Adding the ECS to the Backend Server Group

1. Locate the created dedicated load balancer and click its name
   *ELB-Test*.

2. On the *Listeners* tab page, locate the HTTP listener added to the
   dedicated load balancer and click its name.

3. In the *Backend Server Groups* tab on the right, click *IP as
   Backend Servers*.

   ![*Figure 11* IP as backend
   servers](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459306.png)

4. Click *Add IP as Backend Server*, configure the parameters, and
   click *OK*. For details, see the *Elastic Load Balance User Guide*.

   - **Backend Server IP Address**: Private IP address of
     `ECS-Test` (`172.17.0.205`)
   - **Backend Port**: the port enabled for Nginx on `ECS-Test`
   - **Weight**: Set this parameter as required.

   ![*Figure 12* Adding ECS-Test using its IP
   address](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674139189.png)

## Verifying Traffic Routing

:::note
An EIP is not necessary if you do not want to access the ELB
externally; you can always access the ELB from its private IP address.
:::

1. Locate the dedicated load balancer *ELB-Test* and click *More*
   in the *Operation* column.

2. Select *Bind IPv4 EIP* to bind an EIP to `ELB-Test`.

   ![*Figure 13* EIP bound to the load
   balancer](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625779202.png)

3. Enter `http://` followed by the EIP bound to *ELB-Test* in the
   address box of your browser to access the dedicated load balancer.
   If the following page is displayed, the load balancer routes the
   request to `ECS-Test`, which processes the request and returns the
   requested page.

   :::note
   If the backend server is reported as unhealthy, check whether
   the ECS subnet and the ELB subnet are associated with the route
   tables created above.
   :::

   ![*Figure 14* Verifying that the request is routed to
   ECS-Test](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625299490.png)
diff --git a/docs/best-practices/networking/elastic-load-balancing/routing-traffic-to-backend-servers-in-the-same-vpc-as-the-load-balancer.md b/docs/best-practices/networking/elastic-load-balancing/routing-traffic-to-backend-servers-in-the-same-vpc-as-the-load-balancer.md
new file mode 100644
index 000000000..47eb21af3
--- /dev/null
+++ b/docs/best-practices/networking/elastic-load-balancing/routing-traffic-to-backend-servers-in-the-same-vpc-as-the-load-balancer.md
@@ -0,0 +1,157 @@
---
id: routing-traffic-to-backend-servers-in-the-same-vpc-as-the-load-balancer
title: Routing Traffic to Backend Servers in the Same VPC as the Load Balancer
tags: [vpc, elb, load-balancing]
---

# Routing Traffic to Backend Servers in the Same VPC as the Load Balancer {#elb_bp_0303}

You can route traffic to backend servers in the VPC where the load
balancer is running.

## Solution Design

- A dedicated load balancer `ELB-Test` is running in a VPC named
  `vpc-Test` (`10.1.0.0/16`).
- The backend server `ECS-Test` is also running in `vpc-Test`
  (`10.1.0.0/16`).
- `ECS-Test` needs to be added to the backend server group
  associated with `ELB-Test`.

![*Figure 1* Adding a backend server in the same VPC as the load
balancer](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619218.png)

:::note Advantages
You can add servers in the same VPC as the load balancer to the backend
server group of the load balancer and then route incoming traffic to
these servers.
:::

### Prerequisites

:::note
To calculate the fees, you can visit the Open Telekom Cloud [Price
calculator](https://open-telekom-cloud.com/en/prices/price-calculator).
:::

| Resource Type | Resource Name | Description | Quantity |
| --- | --- | --- | --- |
| VPC | vpc-Test | The VPC where ELB-Test and ECS-Test are running: `10.1.0.0/16` | 1 |
| ELB | ELB-Test | The dedicated load balancer named ELB-Test | 1 |
| EIP | EIP-Test | The EIP bound to ELB-Test | 1 |
| ECS | ECS-Test | The ECS running in vpc-Test | 1 |

**Table 1** Resource planning

### Procedure

![*Figure 2* Process for adding backend servers in the same VPC as the
load balancer](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059073.png)

## Creating a VPC

1. Log in to the management console.

2. Under *Networking*, select *Virtual Private Cloud*. On the
   *Virtual Private Cloud* page displayed, click *Create VPC*.

3. Configure the parameters as follows and click *Create Now*. For
   details on how to create a VPC, see the [Virtual Private Cloud User
   Guide](https://docs.otc.t-systems.com/virtual-private-cloud/umn/vpc_and_subnet/vpc/creating_a_vpc.html).

   - **Name**: `vpc-Test`
   - **IPv4 CIDR Block**: `10.1.0.0/16`
   - Configure other parameters as required.

   ![*Figure 3* Creating
   *vpc-Test*](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459326.png)

## Creating an ECS

1. Under *Computing*, click *Elastic Cloud Server*.

2. In the upper right corner, click *Create ECS*.

3. Configure the parameters as required. For details, see the [Elastic
   Cloud Server User
   Guide](https://docs.otc.t-systems.com/elastic-cloud-server/umn/getting_started/creating_an_ecs/overview.html).

   Select *vpc-Test* for VPC and set *Name* to `ECS-Test`.
   ![*Figure 6* Creating
   ECS-Test](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625299518.png)

4. Deploy Nginx on the ECS.

   ![*Figure 7* Deploying Nginx on
   *ECS-Test*](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619246.png)

## Creating a Dedicated Load Balancer and Adding an HTTP Listener and a Backend Server Group to the Load Balancer

1. Under *Networking*, click *Elastic Load Balance*.

2. In the upper right corner, click *Create Elastic Load Balancer*.

3. Configure the parameters as follows. For details, see the [Elastic Load
   Balance User
   Guide](https://docs.otc.t-systems.com/elastic-load-balancing/umn/load_balancer/creating_a_dedicated_load_balancer.html).

   - **Type**: `Dedicated`
   - **IP as a Backend**: `Enable`
   - **VPC**: `vpc-Test`
   - **Name**: `ELB-Test`
   - Configure other parameters as required.

   ![*Figure 8* Creating a dedicated load balancer named
   *ELB-Test*](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001673939093.png)

4. Add an HTTP listener and a backend server group to the created
   dedicated load balancer. For details, see the [Elastic Load Balance User
   Guide](https://docs.otc.t-systems.com/elastic-load-balancing/umn/listener/adding_an_http_listener.html).

## Adding the ECS to the Backend Server Group

1. Locate the dedicated load balancer and click its name *ELB-Test*.

2. On the *Listeners* tab page, locate the HTTP listener added to the
   dedicated load balancer and click its name.

3. In the *Backend Server Groups* tab on the right, click *Add
   Backend Server*, configure the parameters, and click *OK*. For
   details, see the *Elastic Load Balance User Guide*.

   - **Backend Server**: `ECS-Test`
   - **Backend Port**: the port enabled for Nginx on `ECS-Test`
   - **Weight**: Configure this parameter as required.

   ![*Figure 9* Adding backend
   servers](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059081.png)

## Verifying Traffic Routing

:::note
An EIP is not necessary if you do not want to access the ELB
externally; you can always access the ELB from its private IP address.
:::

1. Locate the dedicated load balancer *ELB-Test* and click *More*
   in the *Operation* column.

2. Select *Bind IPv4 EIP* to bind an EIP to `ELB-Test`.

   ![*Figure 10* EIP bound to the load
   balancer](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674259073.png)

3. Enter `http://` followed by the EIP bound to *ELB-Test* in the
   address box of your browser to access the dedicated load balancer.
   If the following page is displayed, the load balancer routes the
   request to `ECS-Test`, which processes the request and returns the
   requested page.

   :::note
   If the backend server is reported as unhealthy, check whether
   the ECS subnet and the ELB subnet are associated with the required
   route tables.
   :::

   ![*Figure 11* Verifying traffic
   routing](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459334.png)
diff --git a/docs/best-practices/networking/elastic-load-balancing/using-advanced-forwarding-for-application-iteration.md b/docs/best-practices/networking/elastic-load-balancing/using-advanced-forwarding-for-application-iteration.md
new file mode 100644
index 000000000..579c5c296
--- /dev/null
+++ b/docs/best-practices/networking/elastic-load-balancing/using-advanced-forwarding-for-application-iteration.md
@@ -0,0 +1,240 @@
---
id: using-advanced-forwarding-for-application-iteration
title: Using Advanced Forwarding for Application Iteration
tags: [vpc, elb, load-balancing]
---

# Using Advanced Forwarding for Application Iteration

As your business grows, you may need to upgrade your application while
both the old and the new version are in use. Once the new version has
been optimized based on user feedback, you will want all users to move
to it. During this transition, you can use advanced forwarding to route
requests to the two versions as needed.

## Solution Design

### Prerequisites

- An Open Telekom Cloud account is available and real-name
  authentication has been completed.
- The account balance is sufficient to pay for the resources involved
  in this best practice.
- Six (6) ECSs are available, with three having the application of the old
  version deployed and the other three having the new version
  deployed.

### Procedure

![*Figure 1*
Flowchart](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001221220190.png)

| Resource Name | Resource Type | Description |
| --------------------- | ----------------------- | ---------------------------------------------- |
| ELB-Test | Dedicated load balancer | Only dedicated load balancers support advanced forwarding. |
| Server_Group-Test01 | Backend server group | Used to manage the ECSs where the application of the old version is deployed. |
| Server_Group-Test02 | Backend server group | Used to manage the ECSs where the application of the new version is deployed. |
| ECS01 | ECS | Used to deploy the application of the old version and added to *Server_Group-Test01*. |
| ECS02 | ECS | Used to deploy the application of the old version and added to *Server_Group-Test01*. |
| ECS03 | ECS | Used to deploy the application of the old version and added to *Server_Group-Test01*. |
| ECS04 | ECS | Used to deploy the application of the new version and added to *Server_Group-Test02*. |
| ECS05 | ECS | Used to deploy the application of the new version and added to *Server_Group-Test02*. |
| ECS06 | ECS | Used to deploy the application of the new version and added to *Server_Group-Test02*. |

**Table 1** Resource planning

:::note
In this practice, the dedicated load balancer is in the same VPC as the
ECSs. You can also add servers in a different VPC or in an on-premises
data center as needed. For details, see
[Routing Traffic to Backend Servers in Different VPCs](/docs/best-practices/networking/elastic-load-balancing/routing-traffic-to-backend-servers-in-different-vpcs-from-the-load-balancer.md).
:::

## Configuring a Dedicated Load Balancer

1. Log in to the management console.

2. Under *Networking*, click *Elastic Load Balance*.

3. In the upper right corner, click *Create Elastic Load Balancer*.

4. Create a dedicated load balancer *ELB-Test*. Configure the
   parameters as follows.
For details, see the [Elastic Load Balance User
   Guide](https://docs.otc.t-systems.com/elastic-load-balancing/umn/load_balancer/creating_a_dedicated_load_balancer.html).

   - *Type*: `Dedicated`
   - *Name*: `ELB-Test`
   - Configure other parameters as required.

5. Add an HTTP listener to *ELB-Test*. For details, see the [Elastic Load
   Balance User
   Guide](https://docs.otc.t-systems.com/elastic-load-balancing/umn/listener/adding_an_http_listener.html).

   ![*Figure 2* HTTP
   listener](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265145841.png)

6. Enable advanced forwarding. For details, see the [Elastic Load Balance
   User
   Guide](https://docs.otc.t-systems.com/elastic-load-balancing/umn/advanced_features_of_http_https_listeners/advanced_forwarding_dedicated_load_balancers/configuring_advanced_forwarding.html).

   ![*Figure 3* Enabling advanced
   forwarding](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001220740254.png)

## Creating Backend Server Groups and Adding Backend Servers

1. Locate *ELB-Test* and click its name.

2. On the *Listeners* tab, click *Create Backend Server Group* in
   the upper right corner.

   - *Name*: `Server_Group-Test01`
   - *Backend Protocol*: `HTTP`
   - Configure other parameters as required.

3. Repeat *Step 2* to create backend server group `Server_Group-Test02`.

   ![*Figure 4* Backend server
   groups](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265579817.png)

4. Add `ECS01`, `ECS02`, and `ECS03` to backend server group
   `Server_Group-Test01`.

5. Add `ECS04`, `ECS05`, and `ECS06` to backend server group
   `Server_Group-Test02`.

## Forwarding Requests to Different Versions of the Application based on HTTP Request Methods

Configure two advanced forwarding policies with the HTTP request method
as the condition to route GET and DELETE requests to the application of
the old version and POST and PUT requests to the application of the new
version. When the application of the new version runs stably, direct all
requests to the new version.

![*Figure 5* Forwarding requests based on HTTP request
methods](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265745537.png)

1. Locate the dedicated load balancer and click its name *ELB-Test*.

2. On the *Listeners* tab page, locate the HTTP listener added to the
   dedicated load balancer and click its name.

3. On the *Forwarding Policies* tab page on the right, click *Add
   Forwarding Policy* to forward GET and DELETE requests to the old
   version.

   Select *GET* and *DELETE* from the *HTTP request method*
   drop-down list, select *Forward to backend server group* for
   *Action*, and select *Server_Group-Test01* from the *Backend
   Server Group* drop-down list.

   ![*Figure 6* Forwarding GET and DELETE requests to the application
   of the old
   version](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265924809.png)

4. Click *Save*.

5. Repeat *Step 3* and *Step 4* to add a forwarding policy that forwards
   PUT and POST requests to the application of the new version.

   Select *PUT* and *POST* from the *HTTP request method*
   drop-down list, select *Forward to backend server group* for
   *Action*, and select *Server_Group-Test02* from the *Backend
   Server Group* drop-down list.
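   After both policies are saved, you can check the routing from any client that can reach the listener. The commands below are only a sketch; the IP address is a placeholder for the EIP (or private IP address) of *ELB-Test*:

   ```bash
   # GET and DELETE requests should be answered by the old version (Server_Group-Test01)
   curl -X GET http://80.158.xxx.xxx/

   # POST and PUT requests should be answered by the new version (Server_Group-Test02)
   curl -X POST http://80.158.xxx.xxx/
   ```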
   ![*Figure 7* Forwarding PUT and POST requests to the application
   of the new
   version](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265646757.png)

## Forwarding Requests to Different Versions of the Application based on HTTP Headers

If, for example, the old version supports both Chinese and English, but the new
version supports only English because its Chinese version is still under
development, you can configure two advanced forwarding policies with the
HTTP header as the condition to route requests for the Chinese
application to the old version and requests for the English application
to the new version. When the application of the new version supports the
Chinese language, direct all requests to it.

![*Figure 8* Smooth application transition between the old and new
versions based on the HTTP request
header](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265465929.png)

1. Locate the dedicated load balancer and click its name *ELB-Test*.

2. On the *Listeners* tab page, locate the HTTP listener added to the
   dedicated load balancer and click its name.

3. On the *Forwarding Policies* tab page on the right, click
   *Add Forwarding Policy* to forward requests to the old version.

   Select *HTTP header* from the drop-down list, set the key to
   *Accept-Language* and the value to *zh-cn*, set the action to
   *Forward to backend server group*, and select
   *Server_Group-Test01* as the backend server group.

   ![*Figure 9* Forwarding requests to the application of the old
   version](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265928345.png)

4. Click *Save*.

5. Repeat *Step 3* and *Step 4* to add a forwarding policy that forwards
   requests to the application of the new version.

   Select *HTTP header* from the drop-down list, set the key to
   *Accept-Language* and the value to *en-us*, set the action to
   *Forward to backend server group*, and select
   *Server_Group-Test02* as the backend server group.

   ![*Figure 10* Forwarding requests to the application of the new
   version](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265488349.png)

## Forwarding Requests to Different Versions of the Application based on Query Strings

If the application is deployed across regions, you can configure two
advanced forwarding policies with the query string as the condition to
forward requests to the application in region 1 to the old version and
requests to the application in region 2 to the new version. When the
application of the new version runs stably, direct all requests to
the new version.

![*Figure 11* Forwarding requests based on query
strings](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001221308334.png)

:::note

- Dedicated load balancers can distribute traffic across VPCs or
  regions.
- In this example, you need to use Cloud Connect to connect the VPCs
  in the two regions and then use the dedicated load balancer to route
  traffic to backend servers in the two regions.

:::

1. Locate the dedicated load balancer and click its name *ELB-Test*.

2. On the *Listeners* tab page, locate the HTTP listener added to the
   dedicated load balancer and click its name.

3. On the *Forwarding Policies* tab page on the right, click
   *Add Forwarding Policy* to forward requests to the application of the
   old version.
   Select *Query string* from the drop-down list, set the key to
   *region* and the value to *region01*, set *Action* to *Forward to
   backend server group*, and select *Server_Group-Test01* as the
   backend server group.

   ![*Figure 12* Forwarding requests to the old
   version](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001221328134.png)

4. Click *Save*.

5. Repeat *Step 3* and *Step 4* to add a forwarding policy that forwards
   requests to the application of the new version.

   Select *Query string* from the drop-down list, set the key to
   *region* and the value to *region02*, set *Action* to *Forward to
   backend server group*, and select *Server_Group-Test02* as the
   backend server group.

   ![*Figure 13* Forwarding requests to the new
   version](/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265648321.png)
\ No newline at end of file
diff --git a/docs/best-practices/networking/virtual-private-cloud/vpc-and-subnet-planning-suggestions.md b/docs/best-practices/networking/virtual-private-cloud/vpc-and-subnet-planning-suggestions.md
new file mode 100644
index 000000000..2b367d48f
--- /dev/null
+++ b/docs/best-practices/networking/virtual-private-cloud/vpc-and-subnet-planning-suggestions.md
@@ -0,0 +1,231 @@
---
id: vpc-and-subnet-planning-suggestions
title: VPC and Subnet Planning Suggestions
tags: [vpc]
---

# VPC and Subnet Planning Suggestions

Before creating your VPCs, determine how many VPCs and subnets you will
need, and plan the IP address ranges and connectivity options.

- [VPC and Subnet Planning Suggestions](#vpc-and-subnet-planning-suggestions)
  - [How Do I Determine How Many VPCs I Need?](#how-do-i-determine-how-many-vpcs-i-need)
    - [One VPC](#one-vpc)
    - [Multiple VPCs](#multiple-vpcs)
  - [How Do I Plan Subnets?](#how-do-i-plan-subnets)
  - [How Do I Plan Routing Policies?](#how-do-i-plan-routing-policies)
  - [How Do I Connect to an On-Premises Data Center?](#how-do-i-connect-to-an-on-premises-data-center)
  - [How Do I Access the Internet?](#how-do-i-access-the-internet)
    - [Use EIPs to enable a small number of ECSs to access the Internet](#use-eips-to-enable-a-small-number-of-ecss-to-access-the-internet)
    - [Use a NAT gateway to enable a large number of ECSs to access the Internet](#use-a-nat-gateway-to-enable-a-large-number-of-ecss-to-access-the-internet)
    - [Use ELB to access the Internet if there are a large number of concurrent requests](#use-elb-to-access-the-internet-if-there-are-a-large-number-of-concurrent-requests)
  - [Additional Resources](#additional-resources)

## How Do I Determine How Many VPCs I Need?

VPCs are region-specific. By default, networks in VPCs in different
regions or even in the same region are not connected.

#### One VPC

If your services do not require network isolation, a single VPC
should be enough.

#### Multiple VPCs

If you have multiple service systems in a region and each service system
requires an isolated network, you can create a separate VPC for each
service system.

If you require network connectivity between separate VPCs in the same
account or in different accounts, you can use VPC peering connections or
Cloud Connect.

:::tip
If two VPCs are in the same region, use a [VPC peering
connection](https://docs.otc.t-systems.com/virtual-private-cloud/umn/vpc_peering_connection/vpc_peering_connection_overview.html).
+::: + +:::important +By default, you can create a **maximum of five VPCs in each region**. If +this cannot meet your service requirements, request a quota increase. +For details, see [How Do I Apply for a Higher +Quota?](https://docs.otc.t-systems.com/virtual-private-cloud/umn/faq/general_questions/what_is_a_quota.html) +::: + +The following table lists the private CIDR blocks that you can specify +when creating a VPC. Consider the following when selecting a VPC CIDR +block: + +- Number of IP addresses: Reserve sufficient IP addresses in case of + business growth. +- IP address range: Avoid IP address conflicts if you need to connect + a VPC to an on-premises data center or connect two VPCs. + + +| VPC CIDR Block Addresses | IP Address Range | Maximum Number IP | +| ------------------- |-----------------------------| ---------------------------- | +| 10.0.0.0/8-24 | 10.0.0.0-10.255.255.255 | 2\^24-2=16777214 | +| 172.16.0.0/12-24 | 172.16.0.0-172.31.255.255 | 2\^20-2=1048574 | +| 192.168.0.0/16-24 | 192.168.0.0-192.168.255.255 | 2\^16-2=65534 | + + : **Table 1** VPC CIDR blocks + +## How Do I Plan Subnets? + +A subnet is a unique CIDR block with a range of IP addresses in a VPC. +All resources in a VPC must be deployed on subnets. + +- By default, all instances in different subnets of the same VPC can + communicate with each other and the subnets can be located in + different AZs. For example, VPC-A has subnet A01 in AZ A and subnet + A02 in AZ B. Subnet A01 and subnet B01 can communicate with each + other by default. + +- After a subnet is created, its CIDR block cannot be modified. + Subnets in the same VPC cannot overlap. + + When you create a VPC, a default subnet will be created together. If + you need more subnets, see [Creating a Subnet for the + VPC](https://docs.otc.t-systems.com/virtual-private-cloud/umn/vpc_and_subnet/subnet/creating_a_subnet_for_the_vpc.html). + + A subnet mask can be between the netmask of its VPC CIDR block and + /28 netmask. If a VPC CIDR block is 10.0.0.0/16, its subnet mask can + between 16 to 28. + + For example, if the CIDR block of VPC-A is 10.0.0.0/16, you can + specify 10.0.0.0/24 for subnet A01, 10.0.1.0/24 for subnet A02, and + 10.0.3.0/24 for subnet A03. + + :::important + By default, you can create a **maximum of 100 subnets in each region**. + If this cannot meet your service requirements, request a quota + increase by referring to [How Do I Apply for a Higher + Quota?](https://docs.otc.t-systems.com/virtual-private-cloud/umn/faq/general_questions/what_is_a_quota.html) + ::: + +When planning subnets, consider the following: + +- You create different subnets for different modules in a VPC. For + example, in VPC-A, you can create subnet A01 for web services, + subnet A02 for management services, and subnet A03 for data + services. You can leverage network ACLs to control access to each + subnet. +- If your VPC needs to communicate with an on-premises data center + through VPN or Direct Connect, ensure that the VPC subnet and the + CIDR block used for communication in the data center do not overlap. + +## How Do I Plan Routing Policies? + +When you create a VPC, the system automatically generates a default +route table for the VPC. If you create a subnet in the VPC, the subnet +automatically associates with the default route table. A route table +contains a set of routes that are used to determine where network +traffic from your subnets in a VPC is directed. The default route table +ensures that subnets in a VPC can communicate with each other. 
If you do not want to use the default route table, you can create a
custom route table and associate it with the subnets. The custom route
table associated with a subnet affects only the outbound traffic. The
default route table controls the inbound traffic.

You can add routes to default and custom route tables and configure the
destination, next hop type, and next hop in the routes to determine
where network traffic is directed. Routes are classified into system
routes and custom routes.

- **System routes**: Routes that are automatically added by the system and
  cannot be modified or deleted. System routes allow instances in a
  VPC to communicate with each other.

- **Custom routes**: Routes that can be modified and deleted. The
  destination of a custom route cannot overlap with that of a system
  route.

  :::caution
  You cannot add two routes with the same destination to a VPC route
  table, even if their next hop types are different, because the
  destination determines the route priority. According to the
  longest-match routing rule, the destination with a higher matching
  degree is preferentially selected for packet forwarding.
  :::

## How Do I Connect to an On-Premises Data Center?

If you require interconnection between a VPC and an on-premises data
center, ensure that the VPC does not have an IP address range that
overlaps with that of the on-premises data center to be connected.

As shown in the figure below, you have VPC 1 in region A and VPC 2 and
VPC 3 in region B. To connect to an on-premises data center, a VPC can
use a VPN, as VPC 1 does in Region A, or a Direct Connect connection, as
VPC 2 does in Region B. VPC 2 connects to the data center through a
Direct Connect connection, but to connect to another VPC in that region,
like VPC 3, a VPC peering connection must be established.

![**Figure 1** Connections to on-premises data
centers](/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0287297889.png)

When planning CIDR blocks for VPC 1, VPC 2, and VPC 3:

- The CIDR block of VPC 1 cannot overlap with the CIDR block of the
  on-premises data center in Region A.
- The CIDR block of VPC 2 cannot overlap with the CIDR block of the
  on-premises data center in Region B.
- The CIDR blocks of VPC 2 and VPC 3 cannot overlap.

## How Do I Access the Internet?

### Use EIPs to enable a small number of ECSs to access the Internet

When only a few ECSs need to access the Internet, you can bind EIPs
to the ECSs. This will provide them with Internet access. You can also
dynamically unbind the EIPs from the ECSs and bind them to NAT gateways
and load balancers instead, which will also provide Internet access. The
process is straightforward.

For more information about EIP, see [EIP
Overview](https://docs.otc.t-systems.com/elastic-ip/umn/service_overview/index.html).

### Use a NAT gateway to enable a large number of ECSs to access the Internet

When a large number of ECSs need to access the Internet, the public
cloud provides NAT gateways for your ECSs. With NAT gateways, you do not
need to assign an EIP to each ECS, which reduces costs. NAT gateways
offer both source network address translation (SNAT) and destination
network address translation (DNAT). SNAT allows multiple ECSs in the
same VPC to share one or more EIPs to access the Internet and prevents
the EIPs of the ECSs from being exposed to the Internet. DNAT can
implement port-level data forwarding.
It maps
EIP ports to ECS ports so that the ECSs in a VPC can share the same EIP
and bandwidth to provide Internet-accessible services.

For more information, see the [NAT Gateway User
Guide](https://docs.otc.t-systems.com/nat-gateway/umn/).

### Use ELB to access the Internet if there are a large number of concurrent requests

In high-concurrency scenarios, such as e-commerce, you can use load
balancers provided by the ELB service to evenly distribute incoming
traffic across multiple ECSs, allowing a large number of users to
concurrently access your business system or application. ELB is deployed
in cluster mode. It provides fault tolerance for your applications
by automatically balancing traffic across multiple AZs. You can also
take advantage of the deep integration with Auto Scaling (AS), which
enables automatic scaling based on service traffic and ensures service
stability and reliability.

For more information, see the [Elastic Load Balance User
Guide](https://docs.otc.t-systems.com/elastic-load-balancing/umn/).

## Additional Resources

:::info See Also

- [Application Scenarios](https://docs.otc.t-systems.com/virtual-private-cloud/umn/service_overview/application_scenarios.html)
- [Private Network Access](https://support.huaweicloud.com/intl/en-us/bestpractice-vpc/bestpractice_0007.html)
- [Public Network Access](https://support.huaweicloud.com/intl/en-us/bestpractice-vpc/bestpractice_0004.html)

:::
diff --git a/docs/best-practices/storage/elastic-volume-service/raid-array-creation-with-evs-disks.md b/docs/best-practices/storage/elastic-volume-service/raid-array-creation-with-evs-disks.md
new file mode 100644
index 000000000..bdc247f78
--- /dev/null
+++ b/docs/best-practices/storage/elastic-volume-service/raid-array-creation-with-evs-disks.md
@@ -0,0 +1,430 @@
---
id: raid-array-creation-with-evs-disks
title: RAID Array Creation with EVS Disks
tags: [storage, evs, raid]
---

# RAID Array Creation with EVS Disks

Redundant Array of Independent Disks (RAID) is a technology that
combines multiple physical disks into one or more logical units for the
purposes of data redundancy and performance improvement.

:::note
In this document, Elastic Volume Service (EVS) disks instead of physical
disks are used to create RAID arrays. The working principles are the
same.
:::

## Solution Design

This document uses CentOS Stream 9 as the sample OS to describe how to
create a RAID 10 array with four EVS disks. A RAID 10 array consists of
RAID 0 and RAID 1 arrays. In this example, EVS disks are used to create
mirroring arrays (RAID 1) that are then striped as a RAID 0 array to
store data in stripes. At least **four** EVS disks are required.

### Prerequisites

This practice describes the servers and disks planned for creating a RAID
10 array:

* **ECS Name**: `ecs-raid10`
* **ECS Image**: `CentOS Stream 9`
* **ECS Specifications**: `General computing, s3.medium.2 (1 vCPU, 2 GiB memory)`
* **Elastic IP Address**: `80.158.xxx.xxx` if you want to access the ECS from the public Internet. Alternatively, you can use a bastion host.

:::important
Setting up **RAID 10** requires **at least 4 disks**. Therefore, 4 EVS disks are
created and attached to the ECS in this example.
:::

## Creating an ECS

This section shows how to create an ECS. In this example, one ECS needs
to be created. For details about the ECS parameter configurations, see [Prerequisites](#prerequisites):

1. Log in to the management console.
2. Under *Computing*, click *Elastic Cloud Server*.

3. Click *Create ECS*.

   Configure the following parameters as planned:

   * **Image**: Select *CentOS* and choose
     `Standard_CentOS_Stream-9_latest(6GB)`.

   * **EIP**: An EIP is mandatory if the ECS needs to access the
     public network. In this example, the multiple devices admin
     (mdadm) tool needs to be installed, so an EIP must be
     configured. Assign a new EIP or configure an existing one based on
     your environment.

## Creating and Attaching EVS Disks

This section shows how to **create four EVS disks in a batch** and attach
the disks to the ECS.

1. Log in to the management console.

2. Under *Storage*, click *Elastic Volume Service*.

3. Click *Create Disk*.

   ![**Figure 1** EVS disk specifications](/img/docs/best-practices/storage/elastic-volume-service/en-us_image_0139689760.png)

4. Attach the disks to the ECS.

## Creating a RAID Array Using mdadm

This section shows how to create a RAID 10 array using mdadm.

In this example, the ECS runs CentOS Stream 9. Configurations vary
depending on the OS running on the ECS. This section is used for
reference only. For the detailed operations and differences, see the
corresponding OS documents.

1. Log in to the ECS as user **root**.

2. Run the following command to view and take note of the device names:

   ```bash
   fdisk -l | grep /dev/vd | grep -v vda
   ```

   Information similar to the following is displayed:

   ```shell
   [root@ecs-raid10 ~]# fdisk -l | grep /dev/vd | grep -v vda
   Disk /dev/vdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
   Disk /dev/vdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
   Disk /dev/vdd: 10.7 GB, 10737418240 bytes, 20971520 sectors
   Disk /dev/vde: 10.7 GB, 10737418240 bytes, 20971520 sectors
   ```

   In the command output, four disks are attached to the ECS, and the
   device names are `/dev/vdb`, `/dev/vdc`, `/dev/vdd`, and
   `/dev/vde`, respectively.

3. Run the following command to install **mdadm**:

   ```bash
   yum install mdadm -y
   ```

   :::note
   **mdadm** is a utility to create and manage software RAID arrays on
   Linux. Ensure that an EIP has been bound to the ECS where mdadm is
   to be installed.
   :::

   Information similar to the following is displayed:

   ```shell
   [root@ecs-raid10 ~]# yum install mdadm -y
   ......
   Installed:
   mdadm.x86_64 0:4.0-13.el7

   Dependency Installed:
   libreport-filesystem.x86_64 0:2.1.11-40.el7.centos

   Complete!
   ```

4. Run the following command to create a RAID array using the four
   disks:

   ```bash
   mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/vdb /dev/vdc /dev/vdd /dev/vde
   ```

   Parameter description:

   * **RAID array device name**: The value is user-definable. In
     this example, `/dev/md0` is used.

   * **Disk quantity**: Set this parameter based on the actual
     condition. In this example, RAID 10 is created, and at least
     four disks are required. The minimum number of disks required varies depending on the
     RAID level. See [Introduction to Common RAID Arrays](#introduction-to-common-raid-arrays) for details.

   * **RAID level**: Set this parameter based on the actual condition.
     In this example, set it to `RAID 10`.

   * **Device name of the disk**: Enter the device names of all the
     disks that will be used to create the RAID array. **Multiple names
     are separated with spaces**.
   Information similar to the following is displayed:

   ```shell
   [root@ecs-raid10 ~]# mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/vdb /dev/vdc /dev/vdd /dev/vde
   mdadm: layout defaults to n2
   mdadm: layout defaults to n2
   mdadm: chunk size defaults to 512K
   mdadm: size set to 10476544K
   mdadm: Defaulting to version 1.2 metadata
   mdadm: array /dev/md0 started.
   ```

5. Run the following command to format the created RAID array:

   ```bash
   mkfs.ext4 /dev/md0
   ```

   Information similar to the following is displayed:

   ```shell
   [root@ecs-raid10 ~]# mkfs.ext4 /dev/md0
   mke2fs 1.42.9 (28-Dec-2013)
   Filesystem label=
   OS type: Linux
   Block size=4096 (log=2)
   Fragment size=4096 (log=2)
   Stride=128 blocks, Stripe width=256 blocks
   1310720 inodes, 5238272 blocks
   261913 blocks (5.00%) reserved for the super user
   First data block=0
   Maximum filesystem blocks=2153775104
   160 block groups
   32768 blocks per group, 32768 fragments per group
   8192 inodes per group
   Superblock backups stored on blocks:
   32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
   4096000

   Allocating group tables: done
   Writing inode tables: done
   Creating journal (32768 blocks): done
   Writing superblocks and filesystem accounting information: done
   ```

6. Run the following command to create a mounting directory:

   ```bash
   mkdir /RAID10
   ```

7. Run the following command to mount the RAID array:

   ```bash
   mount /dev/md0 /RAID10
   ```

8. Run the following command to view the mount result:

   ```bash
   df -h
   ```

   Information similar to the following is displayed:

   ```shell
   [root@ecs-raid10 ~]# df -h
   Filesystem Size Used Avail Use% Mounted on
   /dev/vda2 39G 1.5G 35G 5% /
   devtmpfs 911M 0 911M 0% /dev
   tmpfs 920M 0 920M 0% /dev/shm
   tmpfs 920M 8.6M 911M 1% /run
   tmpfs 920M 0 920M 0% /sys/fs/cgroup
   /dev/vda1 976M 146M 764M 17% /boot
   tmpfs 184M 0 184M 0% /run/user/0
   /dev/md0 20G 45M 19G 1% /RAID10
   ```

9. Perform the following operations to enable automatic mounting of the
   RAID array at system startup:

   a. Run the following command to open the **/etc/fstab** file:

      ```bash
      vi /etc/fstab
      ```

   b. Press *i* to enter *editing mode*.

      Information similar to the following is displayed:

      ```shell
      [root@ecs-raid10 ~]# vi /etc/fstab

      #
      # /etc/fstab
      # Created by anaconda on Tue Nov 7 14:28:26 2017
      #
      # Accessible filesystems, by reference, are maintained under '/dev/disk'
      # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
      #
      UUID=27f9be47-838b-4155-b20b-e4c5e013cdf3 / ext4 defaults 1 1
      UUID=2b2000b1-f926-4b6b-ade8-695ee244a901 /boot ext4 defaults 1 2
      ```

   c. Add the following information to the end of the file:

      ```shell
      /dev/md0 /RAID10 ext4 defaults 0 0
      ```

   d. Press *ESC*, enter `:wq!`, and press *ENTER*.

      The system saves the modifications and exits the vi editor.
10. Run the following command to view the RAID array information:

    ```bash
    mdadm -D /dev/md0
    ```

    Information similar to the following is displayed:

    ```shell
    [root@ecs-raid10 ~]# mdadm -D /dev/md0
    /dev/md0:
    Version : 1.2
    Creation Time : Thu Nov 8 15:49:02 2018
    Raid Level : raid10
    Array Size : 20953088 (19.98 GiB 21.46 GB)
    Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
    Raid Devices : 4
    Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Nov 8 16:15:11 2018
    State : clean
    Active Devices : 4
    Working Devices : 4
    Failed Devices : 0
    Spare Devices : 0

    Layout : near=2
    Chunk Size : 512K

    Consistency Policy : resync

    Name : ecs-raid10.novalocal:0 (local to host ecs-raid10.novalocal)
    UUID : f400dbf9:60d211d9:e006e07b:98f8758c
    Events : 19

    Number Major Minor RaidDevice State
    0 253 16 0 active sync set-A /dev/vdb
    1 253 32 1 active sync set-B /dev/vdc
    2 253 48 2 active sync set-A /dev/vdd
    3 253 64 3 active sync set-B /dev/vde
    ```

## Configuring Automatic Start of the RAID Array at Server Startup

This section shows how to add RAID array information, such as the device
name and UUID, to the mdadm configuration file so that the system can
assemble the RAID array from this file at startup.

In this example, the ECS runs CentOS Stream 9. Configurations vary
depending on the OS running on the ECS. This section is for reference
only. For detailed operations and differences, see the documents of the
corresponding OS.

1. Log in to the ECS as user **root**.

2. Run the following command to view the RAID array information:

   ```bash
   mdadm --detail --scan
   ```

   Information similar to the following is displayed:

   ```shell
   [root@ecs-raid10 ~]# mdadm --detail --scan
   ARRAY /dev/md0 metadata=1.2 name=ecs-raid10.novalocal:0 UUID=f400dbf9:60d211d9:e006e07b:98f8758c
   ```

3. Perform the following operations to add the new RAID array information to the mdadm configuration file:

   a. Run the following command to open the **mdadm.conf** file:

      ```bash
      vi /etc/mdadm.conf
      ```

   b. Press *i* to enter editing mode.

   c. Add the following information to the end of the file:

      ```shell
      DEVICE /dev/vdb /dev/vdc /dev/vdd /dev/vde
      ARRAY /dev/md0 metadata=1.2 name=ecs-raid10.novalocal:0 UUID=f400dbf9:60d211d9:e006e07b:98f8758c
      ```

      Description:

      - **DEVICE** line: Indicates the device names of the disks that
        make up the RAID array. Multiple device names are separated
        with spaces.
      - **ARRAY** line: Indicates the RAID array information, exactly as
        reported by `mdadm --detail --scan`.

      :::note
      The preceding information is used for reference only. Add RAID
      array information based on your own configuration and outputs.
      :::

   d. Press *Esc*, enter `:wq!`, and press *Enter*.

      The system saves the modifications and exits the vi editor.

4. Run the following command to check whether the **mdadm.conf** file
   was modified:

   ```bash
   more /etc/mdadm.conf
   ```

   Information similar to the following is displayed:

   ```shell
   [root@ecs-raid10 ~]# more /etc/mdadm.conf
   DEVICE /dev/vdb /dev/vdc /dev/vdd /dev/vde
   ARRAY /dev/md0 metadata=1.2 name=ecs-raid10.novalocal:0 UUID=f400dbf9:60d211d9:e006e07b:98f8758c
   ```

   If the added information is displayed, the file was successfully modified.
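Optionally, you can confirm that the array assembles correctly from
**mdadm.conf** without rebooting. The following sketch stops the array
and reassembles it purely from the configuration file; run it only while
the array is unmounted and idle:

```bash
umount /RAID10
mdadm --stop /dev/md0       # stop the running array
mdadm --assemble --scan     # reassemble it from the DEVICE/ARRAY lines in /etc/mdadm.conf
cat /proc/mdstat            # md0 should be listed as an active raid10 array
mount /dev/md0 /RAID10
```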
## Appendix

### Introduction to Common RAID Arrays

| RAID Level | Description | Read/Write Performance | Security | Disk Usage | Min. Number of Disks Required |
|---|---|---|---|---|---|
| RAID 0 | Stores data on multiple disks, implementing parallel read/write and providing the fastest read/write speed. | Parallel read/write from multiple disks achieves high performance. | Worst. No redundancy capability. If one disk is damaged, the data of the entire RAID array is unavailable. | 100% | 2 |
| RAID 1 | Implements data redundancy based on data mirroring. Half of the disk capacity in the RAID array is used for data, and the other half is used for mirroring to provide data backup. | Read performance: same as a single disk. Write performance: data needs to be written into two disks, so write performance is lower than that of a single disk. | Highest. Provides full backup of disk data. If a disk in the RAID array fails, the system automatically uses the data on the mirror disk. | 50% | 2 |
| RAID 01 | Combines RAID 0 and RAID 1: half of the disks are first grouped into RAID 0 stripes and then used together with the other half to set up a RAID 1 array. | Read performance: same as RAID 0. Write performance: same as RAID 1. | Lower than that of RAID 10. | 50% | 4 |
| RAID 10 | Combines RAID 1 and RAID 0: half of the disks are first set up as a RAID 1 array and then used together with the other half to create RAID 0 stripes. | Read performance: same as RAID 0. Write performance: same as RAID 1. | Same as that of RAID 1. | 50% | 4 |
| RAID 5 | Consists of block-level striping with parity information distributed among the disks; no dedicated parity disk is used. | Read performance: same as RAID 0. Write performance: lower than that of a single disk, because parity data also needs to be written. | Lower than that of RAID 10. | 66.7% | 3 |
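As a rough worked example with the four 10 GiB disks used in this
article: RAID 0 would provide about 40 GiB of usable space, RAID 01 and
RAID 10 about 20 GiB each (consistent with the 19.98 GiB array size
reported by `mdadm -D` above), and RAID 5 about 30 GiB.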
diff --git a/docs/best-practices/storage/object-storage-service/accessing-obs-through-an-nginx-reverse-proxy.md b/docs/best-practices/storage/object-storage-service/accessing-obs-through-an-nginx-reverse-proxy.md new file mode 100644 index 000000000..4e9e5ab55 --- /dev/null +++ b/docs/best-practices/storage/object-storage-service/accessing-obs-through-an-nginx-reverse-proxy.md @@ -0,0 +1,176 @@
---
id: accessing-obs-through-an-nginx-reverse-proxy
title: Accessing OBS Through an NGINX Reverse Proxy
tags: [storage, obs, reverse-proxy, nginx]
---

# Accessing OBS Through an NGINX Reverse Proxy

Generally, you can access OBS using a bucket's access domain name
provided by OBS (for example,
`https://bucketname.obs.eu-de.otc.t-systems.com`) or using a
user-defined domain name bound to an OBS bucket.

In some cases, you may need to use a fixed IP address to access OBS. For
security purposes, some enterprises need to set blacklists and whitelists
of external IP addresses. In this case, a fixed IP address is required.
However, an OBS bucket does not have a fixed IP address, because the DNS
service of Open Telekom Cloud resolves the bucket access domain name to
different IP addresses.

In this case, you can set up an NGINX reverse proxy server on an ECS so
that you can access OBS through a fixed IP address.

## Solution Design

This part explains how to deploy NGINX on an ECS and set up an NGINX
reverse proxy server. The proxy is transparent to users: requests are
sent to the reverse proxy server, which then obtains the required data
from OBS and returns it to users. The reverse proxy server and OBS work
as a whole. Only the IP address of the proxy server is exposed, while the
actual domain name or IP address of OBS is hidden.

![*Figure 1* Principles of accessing OBS through an NGINX reverse
proxy](/img/docs/best-practices/storage/object-storage-service/en-us_image_0273872842.png)

## Prerequisites

- You know the region and access domain name of the bucket. For
  example, the access domain name of a bucket in the eu-de region is
  `nginx-obs.obs.eu-de.otc.t-systems.com`. To obtain this
  information, see [Querying Basic Information of a
  Bucket](https://docs.otc.t-systems.com/object-storage-service/umn/obs_browser_operation_guide/managing_buckets/viewing_basic_information_of_a_bucket.html).
- You have a Linux ECS **in the same region**. CentOS is used here as an
  example. For details, see [Creating an
  ECS](https://docs.otc.t-systems.com/elastic-cloud-server/umn/getting_started/creating_an_ecs/index.html).
- The ECS is bound with an EIP, so that you can download the NGINX
  installation package over the public network.

## Installing NGINX on an ECS

CentOS Stream 9 is used in this example.

a. Log in to the ECS where you will set up the NGINX reverse proxy
   server.

b. Run the following command to install NGINX:

   ```
   sudo dnf install nginx
   ```

c. Run the following commands to start NGINX and configure it to
   start upon system startup:

   ```
   sudo systemctl start nginx
   sudo systemctl enable nginx
   ```

d. Run the following commands to allow HTTP and HTTPS traffic and
   then reload the firewall rules:

   ```
   sudo firewall-cmd --permanent --zone=public --add-service=http
   sudo firewall-cmd --permanent --zone=public --add-service=https
   sudo firewall-cmd --reload
   ```

e. Use a browser on any device to access `http://**ECS EIP**/`. If
   the following information is displayed, NGINX is successfully
   installed.

   ![*Figure 2* NGINX installed
   successfully](/img/docs/best-practices/storage/object-storage-service/en-us_image_0273792190.png)

## Configuring NGINX as reverse proxy for your OBS bucket

a. Run the following command to open the *default.conf* file:

   ```
   vim /etc/nginx/conf.d/default.conf
   ```

b. Press the *i* key to enter the edit mode and modify the
   *default.conf* file.

   ```bash
   server {
       listen 80;
       server_name *.*.*.*;    # Enter the EIP of the ECS.
       proxy_buffering off;    # Disable the proxy buffer (memory).

       location / {
           proxy_pass https://nginx-obs.obs.eu-de.otc.t-systems.com;    # Enter the OBS bucket domain name, starting with http:// or https://.
           index index.html index.htm;    # Specify the homepage of the website. If multiple files are listed, Nginx checks them in the listed order.
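           # Optional directives (not part of the original guide): forward
           # the client address to the backend in standard proxy headers so
           # that access logs can record the real client IP. Uncomment if needed.
           #proxy_set_header X-Real-IP $remote_addr;
           #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;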
       }
   }
   ```

   | Parameter | Description |
   | --------------- | ----------- |
   | server_name | IP address that provides the reverse proxy service. It is the fixed IP address that is exposed to end users for access. Enter the EIP of the ECS where the NGINX reverse proxy service is deployed. |
   | proxy_pass | Address of the proxied server. Enter the OBS bucket access domain name noted in [Prerequisites](#prerequisites). The domain name must start with http:// or https://, for example, `https://nginx-obs.obs.eu-de.otc.t-systems.com`. **Note**: When you call OBS through an API, SDK, or obsutil, set this parameter to the region domain name instead, for example, `obs.eu-de.otc.t-systems.com`. |
   | proxy_buffering | Whether to enable the proxy buffer. The value can be `on` or `off`. If set to `on`, Nginx stores the response returned by the backend in a buffer and then sends the data to the client. If set to `off`, Nginx sends the response to the client as soon as it receives the data from the backend. Default value: `on`. Example: `proxy_buffering off` |

c. Press the *Esc* key and enter *:wq* to save the
   configuration and exit.

d. Run the following command to check the syntax of the NGINX
   configuration file:

   ```
   sudo nginx -t
   ```

e. Run the following command to restart the NGINX service for the
   configuration to take effect:

   ```
   sudo systemctl restart nginx
   ```

## Configuring an OBS bucket policy to allow the IP address of the NGINX proxy server to access OBS

:::note
This step is **optional**.
:::

If your bucket is publicly readable, or if objects in your private
bucket are accessed through URLs containing a signature, skip this
step. For details, see [Authentication of Signature in a
URL](https://docs.otc.t-systems.com/object-storage-service/api-ref/calling_apis/authentication/authentication_of_signature_in_a_url.html).

If you do not want to use URLs containing a signature to access resources
in your private bucket, configure the following bucket policy, which
allows only the IP address of the NGINX proxy server to access your
bucket.

a. In the navigation pane of OBS Console, choose *Object
   Storage*.

b. In the bucket list, click the name of the target bucket to go to the
   *Objects* page.

c. In the navigation pane, choose *Permissions* -> *Bucket
   Policies*.

d. Click *Create*.

e. Choose a policy configuration method. *Visual Editor*
   is used here.

f. Configure the bucket policy parameters.

g. Click *Create*.

## Verifying the reverse proxy configuration

On any device, use the ECS EIP and an object name to access the specified
OBS resources. If the resources can be accessed properly, the
configuration is successful.

For example, visit `http://**ECS EIP**/otc.jpg`.

![*Figure 3* Using a fixed IP address to access OBS
resources](/img/docs/best-practices/storage/object-storage-service/en-us_image_0273876194.png)
\ No newline at end of file diff --git a/docs/blueprints/by-use-case/devops/ci-jenkins-swr-cce.md b/docs/blueprints/by-use-case/devops/ci-jenkins-swr-cce.md index b94893347..b0d68c13e 100644 --- a/docs/blueprints/by-use-case/devops/ci-jenkins-swr-cce.md +++ b/docs/blueprints/by-use-case/devops/ci-jenkins-swr-cce.md @@ -381,10 +381,10 @@ be iterated due to security risks.

 Set the following cluster parameters and retain the values for other parameters:

- - `Kubernetes URL`: cluster API server address. You can enter `https://kubernetes.default.svc.cluster.local:443`.
- - `Kubernetes server certificate key`: your cluster CA certificate.
- - `Credentials`: Select the cluster credential added in. You can click *Test Connection* to check whether the cluster is connected.
- - `Jenkins URL`: Jenkins access path. Enter the IP address of port 8080 set in installing process **(ports 8080 and 50000 must
+ - **Kubernetes URL**: cluster API server address. You can enter `https://kubernetes.default.svc.cluster.local:443`.
+ - **Kubernetes server certificate key**: your cluster CA certificate.
+ - **Credentials**: Select the cluster credential added earlier. You can click *Test Connection* to check whether the cluster is connected.
+ - **Jenkins URL**: Jenkins access path. Enter the IP address and port 8080 set during installation **(ports 8080 and 50000 must
   be enabled for the IP address, that is, the intra-cluster access
   address)**, for example, `http://10.247.222.254:8080`.

@@ -393,26 +393,26 @@

 5. **Pod Template**: Click *Add Pod Template -> Pod Template details* and set the pod template parameters:

- - `Name`: `jenkins-agent`
- - `Namespace`: `cicd`
- - `Labels`: `jenkins-agent`
- - `Usage`: Select `Use this node as much as possible`.
+ - **Name**: `jenkins-agent`
+ - **Namespace**: `cicd`
+ - **Labels**: `jenkins-agent`
+ - **Usage**: Select `Use this node as much as possible`.

 ![**Figure 2** Basic parameters of the pod template](/img/docs/blueprints/by-use-case/devops/cicd-jenkins-swr-cce/en-us_image_0000001399744097.png)

 - Add a container. Click *Add Container -> Container Template*.

- - `Name`: The value must be `jnlp`.
- - `Docker image`: `jenkins/inbound-agent:4.13.3-1`. The
+ - **Name**: The value must be `jnlp`.
+ - **Docker image**: `jenkins/inbound-agent:4.13.3-1`. The
   image version may change with time. Select an image
   version as required or use the latest version.
- - `Working directory`: `/home/jenkins/agent` is selected
+ - **Working directory**: `/home/jenkins/agent` is selected
   by default.
- - `Command to run`/`Arguments to pass to the command`:
-   Delete the existing default value and leave these two
+ - **Command to run**/**Arguments to pass to the command**:
+   Delete the existing default values and leave these two
   parameters empty.
- - `Allocate pseudo-TTY`: Select this parameter.
- - Select `Run in privileged mode` and set `Run As User ID`
+ - Enable **Allocate pseudo-TTY**.
+ - Enable **Run in privileged mode** and set **Run As User ID**
   to `0` (**root** user).

 ![**Figure 3** Container template parameters](/img/docs/blueprints/by-use-case/devops/cicd-jenkins-swr-cce/en-us_image_0000001350206690.png)
@@ -446,11 +446,11 @@ file in PKCS#12 format.

 4. Click *Add Credential*:

- - `Kind`: Select `Certificate`.
- - `Scope`: Select `Global`.
- - `Certificate`: Select *Upload PKCS#12 certificate* and upload the **cert.pfx** file generated in.
- - `Password`: The password customized during **cert.pfx** conversion.
- - `ID`: Set this parameter to `k8s-test-cert`, which can be customized.
+ - **Kind**: Select `Certificate`.
+ - **Scope**: Select `Global`.
+ - **Certificate**: Select *Upload PKCS#12 certificate* and upload the **cert.pfx** file generated earlier.
+ - **Password**: The password customized during **cert.pfx** conversion.
+ - **ID**: Set this parameter to `k8s-test-cert`, which can be customized.

 ![image9](/img/docs/blueprints/by-use-case/devops/cicd-jenkins-swr-cce/en-us_image_0000001400577445.png)
@@ -495,16 +495,16 @@ The pipeline creation procedure is as follows:

 Some parameters in the example need to be modified:

- - `git_url`: Address of your code repository. Replace it with the actual address.
- - `swr_login`: The login command obtained in [Obtaining a long-term SWR Login Command](#obtaining-a-long-term-swr-login-command)
- - `swr_region`: SWR region.
- - `organization`: The actual organization name in SWR.
- - `build_name`: Name of the created image.
- - `credential` The cluster credential added to Jenkins. Enter
+ - **git_url**: Address of your code repository. Replace it with the actual address.
+ - **swr_login**: The login command obtained in [Obtaining a long-term SWR Login Command](#obtaining-a-long-term-swr-login-command)
+ - **swr_region**: SWR region.
+ - **organization**: The actual organization name in SWR.
+ - **build_name**: Name of the created image.
+ - **credential**: The cluster credential added to Jenkins. Enter
   the credential ID. If you want to deploy the service in
   another cluster, add the access credential of the cluster to
   Jenkins again.
For details, see [Setting Cluster Access Credentials](#setting-cluster-access-credentials) - - `apiserver`: IP address of the API server where the application cluster is deployed. Ensure that the IP address can + - **apiserver**: IP address of the API server where the application cluster is deployed. Ensure that the IP address can be accessed from the Jenkins cluster. diff --git a/sidebars.ts b/sidebars.ts index 90d06eb33..3522fd910 100644 --- a/sidebars.ts +++ b/sidebars.ts @@ -213,6 +213,10 @@ const sidebars: SidebarsConfig = { type: 'category', label: 'Elastic Cloud Server', items: [ + // { + // type: 'doc', + // id: 'best-practices/computing/elastic-cloud-server/building-highly-available-web-server-clusters-with-keepalived', + // }, { type: 'link', label: '📚 Go to Help Center', @@ -235,6 +239,10 @@ const sidebars: SidebarsConfig = { type: 'category', label: 'Image Management Service', items: [ + { + type: 'doc', + id: 'best-practices/computing/image-management-service/migrating-service-data-across-accounts-data-disks', + }, { type: 'link', label: '📚 Go to Help Center', @@ -352,6 +360,18 @@ const sidebars: SidebarsConfig = { type: 'category', label: 'Document Database Service', items: [ + { + type: 'doc', + id: 'best-practices/databases/document-database-service/from-ecs-hosted-mongodb-to-dds', + }, + { + type: 'doc', + id: 'best-practices/databases/document-database-service/from-on-premises-mongodb-to-dds', + }, + { + type: 'doc', + id: 'best-practices/databases/document-database-service/from-other-cloud-mongodb-to-dds', + }, { type: 'link', label: '📚 Go to Help Center', @@ -553,6 +573,10 @@ const sidebars: SidebarsConfig = { type: 'category', label: 'Domain Name Service', items: [ + { + type: 'doc', + id: 'best-practices/networking/domain-name-service/configuring-private-domain-names-for-ecss', + }, { type: 'link', label: '📚 Go to Help Center', @@ -575,6 +599,18 @@ const sidebars: SidebarsConfig = { type: 'category', label: 'Elastic Load Balancing', items: [ + { + type: 'doc', + id: 'best-practices/networking/elastic-load-balancing/using-advanced-forwarding-for-application-iteration', + }, + { + type: 'doc', + id: 'best-practices/networking/elastic-load-balancing/routing-traffic-to-backend-servers-in-the-same-vpc-as-the-load-balancer', + }, + { + type: 'doc', + id: 'best-practices/networking/elastic-load-balancing/routing-traffic-to-backend-servers-in-different-vpcs-from-the-load-balancer', + }, { type: 'link', label: '📚 Go to Help Center', @@ -630,6 +666,10 @@ const sidebars: SidebarsConfig = { type: 'category', label: 'Virtual Private Cloud', items: [ + { + type: 'doc', + id: 'best-practices/networking/virtual-private-cloud/vpc-and-subnet-planning-suggestions', + }, { type: 'link', label: '📚 Go to Help Center', @@ -774,6 +814,10 @@ const sidebars: SidebarsConfig = { type: 'category', label: 'Elastic Volume Service', items: [ + { + type: 'doc', + id: 'best-practices/storage/elastic-volume-service/raid-array-creation-with-evs-disks', + }, { type: 'link', label: '📚 Go to Help Center', @@ -785,6 +829,10 @@ const sidebars: SidebarsConfig = { type: 'category', label: 'Object Storage Service', items: [ + { + type: 'doc', + id: 'best-practices/storage/object-storage-service/accessing-obs-through-an-nginx-reverse-proxy', + }, { type: 'link', label: '📚 Go to Help Center', diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001176019342.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001176019342.png new file mode 
100644 index 000000000..d46c18d8c Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001176019342.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212487004.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212487004.png new file mode 100644 index 000000000..4e1a0e056 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212487004.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212615152.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212615152.png new file mode 100644 index 000000000..5633f1445 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212615152.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212616602.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212616602.png new file mode 100644 index 000000000..a4bf68dd8 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212616602.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212775562.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212775562.png new file mode 100644 index 000000000..7d77deefa Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212775562.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212776358.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212776358.png new file mode 100644 index 000000000..6d3b52650 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212776358.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212935148.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212935148.png new file mode 100644 index 000000000..9ff3ee76f Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212935148.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212935184.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212935184.png new file mode 100644 index 000000000..f1340bed9 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001212935184.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257296479.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257296479.png new file mode 100644 index 000000000..3ebe183ae Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257296479.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257296509.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257296509.png new file mode 100644 index 000000000..5230c56e0 Binary files /dev/null and 
b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257296509.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257326429.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257326429.png new file mode 100644 index 000000000..18ada3181 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257326429.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257415811.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257415811.png new file mode 100644 index 000000000..67a721cec Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257415811.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257496199.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257496199.png new file mode 100644 index 000000000..f86ba0f13 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257496199.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257496345.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257496345.png new file mode 100644 index 000000000..49849fe94 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0000001257496345.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0281926451.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0281926451.png new file mode 100644 index 000000000..e93b27313 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0281926451.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0281926452.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0281926452.png new file mode 100644 index 000000000..20616c84f Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0281926452.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0281926466.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0281926466.png new file mode 100644 index 000000000..8593357cd Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0281926466.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681028.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681028.png new file mode 100644 index 000000000..252fafa31 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681028.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681029.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681029.png new file mode 100644 index 000000000..6cc540643 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681029.png differ diff --git 
a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681030.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681030.png new file mode 100644 index 000000000..062ec3ffe Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681030.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681031.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681031.png new file mode 100644 index 000000000..32b57c58d Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681031.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681032.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681032.png new file mode 100644 index 000000000..2f32201d3 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681032.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681033.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681033.png new file mode 100644 index 000000000..d2e59e2fa Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681033.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681034.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681034.png new file mode 100644 index 000000000..e6a7cc9a8 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681034.png differ diff --git a/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681035.png b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681035.png new file mode 100644 index 000000000..417750272 Binary files /dev/null and b/static/img/docs/best-practices/computing/elastic-cloud-server/en-us_image_0285681035.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0000001138989308.png b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0000001138989308.png new file mode 100644 index 000000000..1568fc10b Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0000001138989308.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0000001251619009.png b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0000001251619009.png new file mode 100644 index 000000000..0f5319f1e Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0000001251619009.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0000001251966577.png b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0000001251966577.png new file mode 100644 index 000000000..c80ebd1c3 Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0000001251966577.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0000001648507829.png 
b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0000001648507829.png new file mode 100644 index 000000000..40ae0fd41 Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0000001648507829.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0140629595.png b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0140629595.png new file mode 100644 index 000000000..14ec89a6f Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0140629595.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295094264.png b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295094264.png new file mode 100644 index 000000000..c5d0c1a8f Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295094264.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295099813.png b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295099813.png new file mode 100644 index 000000000..396a66b0b Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295099813.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295100003.png b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295100003.png new file mode 100644 index 000000000..8ac58e185 Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295100003.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295117864.png b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295117864.png new file mode 100644 index 000000000..c9a10b370 Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295117864.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295125718.png b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295125718.png new file mode 100644 index 000000000..f455ee262 Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295125718.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295125796.png b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295125796.png new file mode 100644 index 000000000..a27d9f95a Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295125796.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295128562.png b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295128562.png new file mode 100644 index 000000000..5bf96a0b8 Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295128562.png differ diff --git a/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295129442.png b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295129442.png new file mode 100644 index 
000000000..0f024cff5 Binary files /dev/null and b/static/img/docs/best-practices/computing/image-management-service/en-us_image_0295129442.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001151977634.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001151977634.png new file mode 100644 index 000000000..80d35a308 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001151977634.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001151977946.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001151977946.png new file mode 100644 index 000000000..9de4b8631 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001151977946.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001152137438.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001152137438.png new file mode 100644 index 000000000..a6da90146 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001152137438.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001198097269.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001198097269.png new file mode 100644 index 000000000..7e65cbab2 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001198097269.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001198097583.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001198097583.png new file mode 100644 index 000000000..8478e0e4a Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001198097583.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001199158158.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001199158158.png new file mode 100644 index 000000000..d4a6da189 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001199158158.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001213070166.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001213070166.png new file mode 100644 index 000000000..4fca619f0 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001213070166.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001213229532.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001213229532.png new file mode 100644 index 000000000..6f6a91ab9 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001213229532.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001243756137.png 
b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001243756137.png new file mode 100644 index 000000000..88346ee74 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001243756137.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001244078029.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001244078029.png new file mode 100644 index 000000000..a279f1ed7 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001244078029.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001493711038.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001493711038.png new file mode 100644 index 000000000..3555df137 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0000001493711038.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0180865321.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0180865321.png new file mode 100644 index 000000000..caa2f575b Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0180865321.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0232589882.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0232589882.png new file mode 100644 index 000000000..963568f0b Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0232589882.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0232605869.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0232605869.png new file mode 100644 index 000000000..a84d17798 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0232605869.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0234000688.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0234000688.png new file mode 100644 index 000000000..9a5fa7210 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0234000688.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0295762499.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0295762499.png new file mode 100644 index 000000000..c25c68a25 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0295762499.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0295762649.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0295762649.png new file mode 100644 index 000000000..4af014f4f Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0295762649.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0295762692.png 
b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0295762692.png new file mode 100644 index 000000000..b4701dfc8 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0295762692.png differ diff --git a/static/img/docs/best-practices/databases/document-database-service/en-us_image_0295762707.png b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0295762707.png new file mode 100644 index 000000000..19bd47153 Binary files /dev/null and b/static/img/docs/best-practices/databases/document-database-service/en-us_image_0295762707.png differ diff --git a/static/img/docs/best-practices/networking/domain-name-service/en-us_image_0000001343763404.png b/static/img/docs/best-practices/networking/domain-name-service/en-us_image_0000001343763404.png new file mode 100644 index 000000000..1909444d2 Binary files /dev/null and b/static/img/docs/best-practices/networking/domain-name-service/en-us_image_0000001343763404.png differ diff --git a/static/img/docs/best-practices/networking/domain-name-service/en-us_image_0000001394829705.png b/static/img/docs/best-practices/networking/domain-name-service/en-us_image_0000001394829705.png new file mode 100644 index 000000000..ba9e3dad4 Binary files /dev/null and b/static/img/docs/best-practices/networking/domain-name-service/en-us_image_0000001394829705.png differ diff --git a/static/img/docs/best-practices/networking/domain-name-service/en-us_image_0131021386.png b/static/img/docs/best-practices/networking/domain-name-service/en-us_image_0131021386.png new file mode 100644 index 000000000..e75f40c7e Binary files /dev/null and b/static/img/docs/best-practices/networking/domain-name-service/en-us_image_0131021386.png differ diff --git a/static/img/docs/best-practices/networking/domain-name-service/en-us_image_0173959206.png b/static/img/docs/best-practices/networking/domain-name-service/en-us_image_0173959206.png new file mode 100644 index 000000000..69fbb0a01 Binary files /dev/null and b/static/img/docs/best-practices/networking/domain-name-service/en-us_image_0173959206.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001220740254.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001220740254.png new file mode 100644 index 000000000..93406b9fc Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001220740254.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001221220190.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001221220190.png new file mode 100644 index 000000000..d05d3b7fc Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001221220190.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001221308334.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001221308334.png new file mode 100644 index 000000000..459f7c496 Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001221308334.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001221328134.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001221328134.png new file 
mode 100644 index 000000000..bee6663b2 Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001221328134.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265145841.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265145841.png new file mode 100644 index 000000000..78bda19e8 Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265145841.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265465929.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265465929.png new file mode 100644 index 000000000..5c983cf27 Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265465929.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265488349.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265488349.png new file mode 100644 index 000000000..ade30d3e8 Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265488349.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265579817.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265579817.png new file mode 100644 index 000000000..2a64e73ca Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265579817.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265646757.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265646757.png new file mode 100644 index 000000000..a302fd733 Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265646757.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265648321.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265648321.png new file mode 100644 index 000000000..caed2401b Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265648321.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265745537.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265745537.png new file mode 100644 index 000000000..a65723cf4 Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265745537.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265924809.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265924809.png new file mode 100644 index 000000000..5251dce2c Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265924809.png differ diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265928345.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265928345.png new file mode 100644 index 
000000000..850967418
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001265928345.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625299482.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625299482.png
new file mode 100644
index 000000000..15fa99665
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625299482.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625299490.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625299490.png
new file mode 100644
index 000000000..042faad10
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625299490.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625299518.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625299518.png
new file mode 100644
index 000000000..dcbae0298
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625299518.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459302.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459302.png
new file mode 100644
index 000000000..2c0a00818
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459302.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459306.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459306.png
new file mode 100644
index 000000000..125be107b
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459306.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459326.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459326.png
new file mode 100644
index 000000000..305dea35a
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459326.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459334.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459334.png
new file mode 100644
index 000000000..471ed8027
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625459334.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619210.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619210.png
new file mode 100644
index 000000000..624e6815b
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619210.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619214.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619214.png
new file mode 100644
index 000000000..75463b45d
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619214.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619218.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619218.png
new file mode 100644
index 000000000..6eda9dbb9
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619218.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619246.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619246.png
new file mode 100644
index 000000000..1d1cd4db5
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625619246.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625779198.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625779198.png
new file mode 100644
index 000000000..2e86ad241
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625779198.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625779202.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625779202.png
new file mode 100644
index 000000000..916825eb9
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625779202.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625779218.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625779218.png
new file mode 100644
index 000000000..3fded7917
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001625779218.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001673939077.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001673939077.png
new file mode 100644
index 000000000..5614331a8
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001673939077.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001673939081.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001673939081.png
new file mode 100644
index 000000000..ece7cca16
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001673939081.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001673939093.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001673939093.png
new file mode 100644
index 000000000..916825eb9
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001673939093.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059065.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059065.png
new file mode 100644
index 000000000..da9c98136
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059065.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059069.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059069.png
new file mode 100644
index 000000000..8c29385ea
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059069.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059073.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059073.png
new file mode 100644
index 000000000..75aeeed8a
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059073.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059081.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059081.png
new file mode 100644
index 000000000..2eb61807c
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674059081.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674139185.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674139185.png
new file mode 100644
index 000000000..985089c80
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674139185.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674139189.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674139189.png
new file mode 100644
index 000000000..eadb83f9a
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674139189.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674139197.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674139197.png
new file mode 100644
index 000000000..5156558e1
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674139197.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674259057.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674259057.png
new file mode 100644
index 000000000..fc37e3f17
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674259057.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674259061.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674259061.png
new file mode 100644
index 000000000..45498d4cd
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674259061.png differ
diff --git a/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674259073.png b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674259073.png
new file mode 100644
index 000000000..c4e629183
Binary files /dev/null and b/static/img/docs/best-practices/networking/elastic-load-balancing/en-us_image_0000001674259073.png differ
diff --git a/static/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0000001124559429.png b/static/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0000001124559429.png
new file mode 100644
index 000000000..5dbacedcb
Binary files /dev/null and b/static/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0000001124559429.png differ
diff --git a/static/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0000001124559441.png b/static/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0000001124559441.png
new file mode 100644
index 000000000..615bd9d3b
Binary files /dev/null and b/static/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0000001124559441.png differ
diff --git a/static/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0141273034.png b/static/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0141273034.png
new file mode 100644
index 000000000..c6c267eb9
Binary files /dev/null and b/static/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0141273034.png differ
diff --git a/static/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0287297889.png b/static/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0287297889.png
new file mode 100644
index 000000000..4be1389ec
Binary files /dev/null and b/static/img/docs/best-practices/networking/virtual-private-cloud/en-us_image_0287297889.png differ
diff --git a/static/img/docs/best-practices/storage/elastic-volume-service/en-us_image_0139687404.png b/static/img/docs/best-practices/storage/elastic-volume-service/en-us_image_0139687404.png
new file mode 100644
index 000000000..a99448ff1
Binary files /dev/null and b/static/img/docs/best-practices/storage/elastic-volume-service/en-us_image_0139687404.png differ
diff --git a/static/img/docs/best-practices/storage/elastic-volume-service/en-us_image_0139689760.png b/static/img/docs/best-practices/storage/elastic-volume-service/en-us_image_0139689760.png
new file mode 100644
index 000000000..1eb915d62
Binary files /dev/null and b/static/img/docs/best-practices/storage/elastic-volume-service/en-us_image_0139689760.png differ
diff --git a/static/img/docs/best-practices/storage/object-storage-service/en-us_image_0273792190.png b/static/img/docs/best-practices/storage/object-storage-service/en-us_image_0273792190.png
new file mode 100644
index 000000000..bc619fb77
Binary files /dev/null and b/static/img/docs/best-practices/storage/object-storage-service/en-us_image_0273792190.png differ
diff --git a/static/img/docs/best-practices/storage/object-storage-service/en-us_image_0273872842.png b/static/img/docs/best-practices/storage/object-storage-service/en-us_image_0273872842.png
new file mode 100644
index 000000000..50c197bba
Binary files /dev/null and b/static/img/docs/best-practices/storage/object-storage-service/en-us_image_0273872842.png differ
diff --git a/static/img/docs/best-practices/storage/object-storage-service/en-us_image_0273876194.png b/static/img/docs/best-practices/storage/object-storage-service/en-us_image_0273876194.png
new file mode 100644
index 000000000..cef44aae9
Binary files /dev/null and b/static/img/docs/best-practices/storage/object-storage-service/en-us_image_0273876194.png differ