Start the YB-TServer server.
diff --git a/docs/content/latest/deploy/manual-deployment/start-masters.md b/docs/content/latest/deploy/manual-deployment/start-masters.md
index dd4da51210ec..d6b6c9776ccc 100644
--- a/docs/content/latest/deploy/manual-deployment/start-masters.md
+++ b/docs/content/latest/deploy/manual-deployment/start-masters.md
@@ -14,45 +14,49 @@ showAsideToc: true
---
{{< note title="Note" >}}
-
-- The number of nodes of a cluster on which the YB-Master server need to be started **must** equal the replication factor.
+- The number of nodes in a cluster running YB-Masters **must** equal the replication factor.
- The number of comma-separated addresses present in `master_addresses` should also equal the replication factor.
-
+- For running a single cluster across multiple data centers or 2 clusters in 2 data centers, refer to the [Multi-DC Deployments](../../../deploy/multi-dc/) section.
{{< /note >}}
-## Example scenario
-
-Let us assume the following.
-
-- We want to create a a 4 node cluster with replication factor `3`.
- - We would need to run the YB-Master process on only three of the nodes, for example, `node-a`, `node-b`, `node-c`.
- - Let us assume their private IP addresses are `172.151.17.130`, `172.151.17.220`, and `172.151.17.140`.
-- We have multiple data drives mounted on `/home/centos/disk1`, `/home/centos/disk2`.
+This section covers deployment for a single region or data center in a multi-zone/multi-rack configuration. Note that a single-zone configuration is a special case of multi-zone where all placement-related flags are set to the same value across every node.
-This section covers deployment for a single region or zone (or a single data center or rack). Execute the following steps on each of the instances.
+## Example scenario
-## Run YB-Master services with command line parameters
+- Create a 6-node cluster with replication factor 3.
+  - The YB-Master server should run on only 3 of these nodes but, as noted in the next section, the YB-TServer server should run on all 6 nodes.
+ - Assume the 3 YB-Master private IP addresses are `172.151.17.130`, `172.151.17.220` and `172.151.17.140`.
+  - The cloud is `aws`, the region is `us-west`, and the 3 AZs are `us-west-2a`, `us-west-2b`, and `us-west-2c`. 2 nodes are placed in each AZ in such a way that 1 replica of each tablet (aka shard) gets placed on 1 node in each AZ.
+- Multiple data drives mounted on `/home/centos/disk1`, `/home/centos/disk2`.
-- Run `yb-master` binary on each of the nodes as shown below. Note how multiple directories can be provided to the `--fs_data_dirs` option. For each YB-Master service, replace the RPC bind address configuration with the private IP address of the host running the YB-Master.
+## Run YB-Master servers with command line parameters
-For the full list of configuration options (or flags), see the [YB-Master reference](../../../reference/configuration/yb-master/).
+Run the `yb-master` server on each of the 3 nodes as shown below. Note how multiple directories can be provided to the `--fs_data_dirs` option. Replace the `rpc_bind_addresses` value with the private IP address of the host, and set the `placement_cloud`, `placement_region`, and `placement_zone` values appropriately. For a single-zone deployment, simply use the same value for the `placement_zone` flag on every node.
```sh
$ ./bin/yb-master \
--master_addresses 172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100 \
--rpc_bind_addresses 172.151.17.130 \
--fs_data_dirs "/home/centos/disk1,/home/centos/disk2" \
+ --placement_cloud aws \
+ --placement_region us-west \
+ --placement_zone us-west-2a \
>& /home/centos/disk1/yb-master.out &
```
-## Run YB-Master services with configuration file
+For the full list of configuration options (or flags), see the [YB-Master reference](../../../reference/configuration/yb-master/).
+
+## Run YB-Master servers with configuration file
-- Alternatively, you can also create a `master.conf` file with the following flags and then run `yb-master` with the `--flagfile` option as shown below. For each YB-Master service, replace the RPC bind address configuration option with the private IP address of the YB-Master service.
+Alternatively, you can also create a `master.conf` file with the following flags and then run `yb-master` with the `--flagfile` option as shown below. For each YB-Master server, replace the RPC bind address configuration option with the private IP address of the YB-Master server.
```sh
--master_addresses=172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100
--rpc_bind_addresses=172.151.17.130
--fs_data_dirs=/home/centos/disk1,/home/centos/disk2
+--placement_cloud=aws
+--placement_region=us-west
+--placement_zone=us-west-2a
```
```sh
@@ -61,7 +65,7 @@ $ ./bin/yb-master --flagfile master.conf >& /home/centos/disk1/yb-master.out &
## Verify health
-- Make sure all the 3 yb-masters are now working as expected by inspecting the INFO log. The default logs directory is always inside the first directory specified in the `--fs_data_dirs` flag.
+Make sure all 3 yb-masters are now working as expected by inspecting the INFO log. The default logs directory is always inside the first directory specified in the `--fs_data_dirs` flag.
```sh
$ cat /home/centos/disk1/yb-data/master/logs/yb-master.INFO
diff --git a/docs/content/latest/deploy/manual-deployment/start-tservers.md b/docs/content/latest/deploy/manual-deployment/start-tservers.md
index 2200cf5979a0..94ae219c8166 100644
--- a/docs/content/latest/deploy/manual-deployment/start-tservers.md
+++ b/docs/content/latest/deploy/manual-deployment/start-tservers.md
@@ -15,49 +15,52 @@ showAsideToc: true
{{< note title="Note" >}}
-The number of nodes of a cluster on which the YB-TServer server needs to be started **must** equal or exceed the replication factor in order for any table to get created successfully.
+- The number of nodes in a cluster running YB-TServers **must** equal or exceed the replication factor in order for any table to get created successfully.
+- For running a single cluster across multiple data centers or 2 clusters in 2 data centers, refer to the [Multi-DC Deployments](../../../deploy/multi-dc/) section.
{{< /note >}}
-## Example scenario
-
-Let us assume the following.
+This section covers deployment for a single region or data center in a multi-zone/multi-rack configuration. Note that a single-zone configuration is a special case of multi-zone where all placement-related flags are set to the same value across every node.
-- We want to create a a 4-node cluster with replication factor of `3`.
- - We would need to run the YB-TServer process on all the 4 nodes say `node-a`, `node-b`, `node-c`, `node-d`
- - Let us assume the master private IP addresses are `172.151.17.130`, `172.151.17.220` and `172.151.17.140` (`node-a`, `node-b`, `node-c`)
-- We have multiple data drives mounted on `/home/centos/disk1`, `/home/centos/disk2`
+## Example scenario
-This section covers deployment for a single region/zone (or a single data center/rack). Execute the following steps on each of the instances.
+- Create a 6-node cluster with replication factor of 3.
+  - The YB-TServer server should run on all 6 nodes but, as noted in the previous section, the YB-Master server should run on only 3 of these nodes.
+ - Assume the 3 YB-Master private IP addresses are `172.151.17.130`, `172.151.17.220` and `172.151.17.140`.
+  - The cloud is `aws`, the region is `us-west`, and the 3 AZs are `us-west-2a`, `us-west-2b`, and `us-west-2c`. 2 nodes are placed in each AZ in such a way that 1 replica of each tablet (aka shard) gets placed on 1 node in each AZ.
+- Multiple data drives mounted on `/home/centos/disk1`, `/home/centos/disk2`.
## Run YB-TServer with command line options
-- Run the YB-TServer service (`yb-tserver`) as shown here. Note that all of the master addresses have to be provided using the `--tserver_master_addrs` option. For each YB-TServer, replace the RPC bind address configuration option with the private IP address of the YB-TServer service.
-
-For the full list of configuration options, see the [YB-TServer reference](../../../reference/configuration/yb-tserver/).
+Run the `yb-tserver` server on each of the 6 nodes as shown below. Note that all of the master addresses have to be provided using the `--tserver_master_addrs` option. Replace the `rpc_bind_addresses` value with the private IP address of the host, and set the `placement_cloud`, `placement_region`, and `placement_zone` values appropriately. For a single-zone deployment, simply use the same value for the `placement_zone` flag on every node.
```sh
$ ./bin/yb-tserver \
--tserver_master_addrs 172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100 \
--rpc_bind_addresses 172.151.17.130 \
--start_pgsql_proxy \
- --pgsql_proxy_bind_address=172.151.17.130:5433 \
- --cql_proxy_bind_address=172.151.17.130:9042 \
+ --pgsql_proxy_bind_address 172.151.17.130:5433 \
+ --cql_proxy_bind_address 172.151.17.130:9042 \
--fs_data_dirs "/home/centos/disk1,/home/centos/disk2" \
+ --placement_cloud aws \
+ --placement_region us-west \
+ --placement_zone us-west-2a \
>& /home/centos/disk1/yb-tserver.out &
```
+For the full list of configuration options, see the [YB-TServer reference](../../../reference/configuration/yb-tserver/).
+
If you need to turn on the YEDIS API as well, add `--redis_proxy_bind_address=172.151.17.130:6379` to the above list.
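+
+For example, with the YEDIS API turned on, the full invocation for the first node would look like the sketch below (same sample IP addresses, directories, and placement values as above; adjust them for each host):
+
+```sh
+# Same command as above, with the YEDIS bind address added
+$ ./bin/yb-tserver \
+  --tserver_master_addrs 172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100 \
+  --rpc_bind_addresses 172.151.17.130 \
+  --start_pgsql_proxy \
+  --pgsql_proxy_bind_address 172.151.17.130:5433 \
+  --cql_proxy_bind_address 172.151.17.130:9042 \
+  --redis_proxy_bind_address 172.151.17.130:6379 \
+  --fs_data_dirs "/home/centos/disk1,/home/centos/disk2" \
+  --placement_cloud aws \
+  --placement_region us-west \
+  --placement_zone us-west-2a \
+  >& /home/centos/disk1/yb-tserver.out &
+```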
{{< note title="Note" >}}
-The number of comma-separated values in the `--tserver_master_addrs` option should match the total number of YB-Master services (or the replication factor).
+The number of comma-separated values in the `--tserver_master_addrs` option should match the total number of YB-Master servers (or the replication factor).
{{< /note >}}
## Run YB-TServer with configuration file
-- Alternatively, you can also create a `tserver.conf` file with the following flags and then run the `yb-tserver` with the `--flagfile` option as shown here. For each YB-TServer service, replace the RPC bind address flags with the private IP address of the host running the YB-TServer service.
+Alternatively, you can also create a `tserver.conf` file with the following flags and then run the `yb-tserver` with the `--flagfile` option as shown here. For each YB-TServer server, replace the RPC bind address flags with the private IP address of the host running the YB-TServer server.
```sh
--tserver_master_addrs=172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100
@@ -66,6 +69,9 @@ The number of comma-separated values in the `--tserver_master_addrs` option shou
--pgsql_proxy_bind_address=172.151.17.130:5433
--cql_proxy_bind_address=172.151.17.130:9042
--fs_data_dirs=/home/centos/disk1,/home/centos/disk2
+--placement_cloud=aws
+--placement_region=us-west
+--placement_zone=us-west-2a
```
Add `--redis_proxy_bind_address=172.22.25.108:6379` to the above list if you need to turn on the YEDIS API as well.
@@ -74,9 +80,68 @@ Add `--redis_proxy_bind_address=172.22.25.108:6379` to the above list if you nee
$ ./bin/yb-tserver --flagfile tserver.conf >& /home/centos/disk1/yb-tserver.out &
```
+## Set replica placement policy
+
+{{< note title="Note" >}}
+
+This step is required only for multi-AZ deployments and can be skipped for single-AZ deployments.
+
+{{< /note >}}
+
+The default replica placement policy when the cluster is first created is to treat all nodes as equal, irrespective of the `placement_*` configuration flags. However, for the current deployment, we want to explicitly place 1 replica of each tablet in each AZ. The following command sets a replication factor of 3 across `us-west-2a`, `us-west-2b`, and `us-west-2c`, leading to such a placement.
+
+On any host running the yb-master, run the following command.
+
+```sh
+$ ./bin/yb-admin \
+ --master_addresses 172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100 \
+ modify_placement_info \
+ aws.us-west.us-west-2a,aws.us-west.us-west-2b,aws.us-west.us-west-2c 3
+```
+
+Verify by running the following.
+
+```sh
+$ curl -s http://<yb-master-ip>:7000/cluster-config
+```
+
+And confirm that the output looks similar to what is shown below with `min_num_replicas` set to 1 for each AZ.
+
+```
+replication_info {
+ live_replicas {
+ num_replicas: 3
+ placement_blocks {
+ cloud_info {
+ placement_cloud: "aws"
+ placement_region: "us-west"
+ placement_zone: "us-west-2a"
+ }
+ min_num_replicas: 1
+ }
+ placement_blocks {
+ cloud_info {
+ placement_cloud: "aws"
+ placement_region: "us-west"
+ placement_zone: "us-west-2b"
+ }
+ min_num_replicas: 1
+ }
+    placement_blocks {
+      cloud_info {
+        placement_cloud: "aws"
+        placement_region: "us-west"
+        placement_zone: "us-west-2c"
+      }
+      min_num_replicas: 1
+    }
+ }
+}
+```
+
## Verify health
-Make sure all four YB-TServer services are now working as expected by inspecting the INFO log. The default logs directory is always inside the first directory specified in the `--fs_data_dirs` flag.
+Make sure all YB-TServer servers are now working as expected by inspecting the INFO log. The default logs directory is always inside the first directory specified in the `--fs_data_dirs` flag.
You can do this as shown below.
diff --git a/docs/content/latest/deploy/manual-deployment/system-config.md b/docs/content/latest/deploy/manual-deployment/system-config.md
index 180cf73ceb7b..ba2a3d01f5c2 100644
--- a/docs/content/latest/deploy/manual-deployment/system-config.md
+++ b/docs/content/latest/deploy/manual-deployment/system-config.md
@@ -28,7 +28,7 @@ Here's the command to install these packages.
$ sudo yum install -y epel-release ntp
```
-## Setting ulimits
+## ulimits
In Linux, `ulimit` is used to limit and control the usage of system resources (threads, files, and network connections) on a per-process or per-user basis.
@@ -81,7 +81,7 @@ $ ulimit -n
{{< note title="Note" >}}
-- After changing a ulimit setting, the YB-Master and YB-TServer services must be restarted in order for the new settings to take effect. Check the `/proc/` file to see the current settings.
+- After changing a ulimit setting, the YB-Master and YB-TServer servers must be restarted in order for the new settings to take effect. Check the `/proc/` file to see the current settings.
- Changes made using ulimit may revert following a system restart depending on the system configuration.
{{< /note >}}
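+
+For example, one quick way to confirm the limits that a running server actually picked up is to read its `limits` file under procfs. This is a minimal sketch, assuming a Linux node with a single `yb-tserver` process running (use the corresponding process name for yb-master):
+
+```sh
+# Print the effective resource limits of the running yb-tserver process
+$ cat /proc/$(pgrep -f yb-tserver | head -1)/limits
+```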
diff --git a/docs/content/latest/deploy/manual-deployment/verify-deployment.md b/docs/content/latest/deploy/manual-deployment/verify-deployment.md
index 189cd2590b1e..6e5c73f7ac23 100644
--- a/docs/content/latest/deploy/manual-deployment/verify-deployment.md
+++ b/docs/content/latest/deploy/manual-deployment/verify-deployment.md
@@ -13,9 +13,9 @@ isTocNested: true
showAsideToc: true
---
-As before, we shall assume that we brought up a universe on four nodes with replication factor `3`. Let us assume their IP addresses are `172.151.17.130`, `172.151.17.220`, `172.151.17.140` and `172.151.17.150`
+We now have a cluster/universe on 6 nodes with replication factor `3`. Assume their IP addresses are `172.151.17.130`, `172.151.17.220`, `172.151.17.140`, `172.151.17.150`, `172.151.17.160` and `172.151.17.170`. YB-Master servers are running on only the first 3 of these nodes.
-## [Optional] Setup YEDIS service
+## [Optional] Setup YEDIS API
{{< note title="Note" >}}
@@ -23,7 +23,7 @@ If you want this cluster to be able to support Redis clients, you **must** perfo
{{< /note >}}
-While the YCQL and YSQL services are turned on by default after all of the YB-TServers start, the Redis-compatible YEDIS service is off by default. If you want this cluster to be able to support Redis clients, run the following command from any of the 4 instances. The command below will add the special Redis table into the DB and also start the YEDIS server on port 6379 on all instances.
+While the YCQL and YSQL APIs are turned on by default after all of the YB-TServers start, the Redis-compatible YEDIS API is off by default. If you want this cluster to be able to support Redis clients, run the following command from any of the 6 instances. The command below will add the special Redis table into the DB and also start the YEDIS server on port 6379 on all instances.
```sh
$ ./bin/yb-admin --master_addresses 172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100 setup_redis_table
@@ -45,22 +45,22 @@ If this is a public cloud deployment, remember to use the public ip for the node
## Connect clients
-- Clients can connect to YSQL API at:
+- Clients can connect to YSQL API at
```sh
-172.151.17.130:5433,172.151.17.220:5433,172.151.17.140:5433,172.151.17.150:5433
+172.151.17.130:5433,172.151.17.220:5433,172.151.17.140:5433,172.151.17.150:5433,172.151.17.160:5433,172.151.17.170:5433
```
-- Clients can connect to YCQL API at:
+- Clients can connect to YCQL API at
```sh
-172.151.17.130:9042,172.151.17.220:9042,172.151.17.140:9042,172.151.17.150:9042
+172.151.17.130:9042,172.151.17.220:9042,172.151.17.140:9042,172.151.17.150:9042,172.151.17.160:9042,172.151.17.170:9042
```
-- Clients can connect to YEDIS API at:
+- Clients can connect to YEDIS API at
```sh
-172.151.17.130:6379,172.151.17.220:6379,172.151.17.140:6379,172.151.17.150:6379
+172.151.17.130:6379,172.151.17.220:6379,172.151.17.140:6379,172.151.17.150:6379,172.151.17.160:6379,172.151.17.170:6379
```
## Default ports reference
diff --git a/docs/content/latest/deploy/replicate-2dc.md b/docs/content/latest/deploy/multi-dc/2dc-deployment.md
similarity index 80%
rename from docs/content/latest/deploy/replicate-2dc.md
rename to docs/content/latest/deploy/multi-dc/2dc-deployment.md
index 75e7894d0a6a..742947aa319a 100644
--- a/docs/content/latest/deploy/replicate-2dc.md
+++ b/docs/content/latest/deploy/multi-dc/2dc-deployment.md
@@ -5,19 +5,28 @@ description: Two data center (2DC) deployments
beta: /faq/product/#what-is-the-definition-of-the-beta-feature-tag
menu:
latest:
- parent: deploy
- identifier: replicate-2dc
+ parent: multi-dc
+ identifier: 2dc-deployment
weight: 633
+aliases:
+ - /latest/deploy/replicate-2dc/
type: page
isTocNested: true
showAsideToc: true
---
+
+{{< tip title="Recommended Reading" >}}
+
+[9 Techniques to Build Cloud-Native, Geo-Distributed SQL Apps with Low Latency](https://blog.yugabyte.com/9-techniques-to-build-cloud-native-geo-distributed-sql-apps-with-low-latency/) highlights the various multi-DC deployment strategies (including 2DC deployments) for a distributed SQL database like YugabyteDB.
+
+{{< /tip >}}
+
For details on the two data center (2DC) deployment architecture and supported replication scenarios, see [Two data center (2DC) deployments](../../architecture/2dc-deployments).
-Follow the steps below to set up a two data center (2DC) deployment using either unidirectional (aka master-follower) or bidirectional (aka multi-master) replication between the data centers.
+Follow the steps below to set up a 2DC deployment using either unidirectional (aka master-follower) or bidirectional (aka multi-master) replication between the data centers.
-## Set up
+## 1. Set up
### Producer universe
@@ -39,7 +48,7 @@ Make sure to create the same tables as you did for the producer universe.
After creating the required tables, you can now set up asynchronous replication using the steps below.
-## Unidirectional (aka master-follower) replication
+## 2. Unidirectional (aka master-follower) replication
1. Look up the producer universe UUID and the table IDs for the two tables and the index table on master UI.
@@ -67,13 +76,13 @@ There should be three table IDs in the command above — two of those are YSQL f
{{< /note >}}
-## Bidirectional (aka multi-master) replication
+## 3. Bidirectional (aka multi-master) replication
To set up bidirectional replication, follow the steps in the Unidirectional replication section above and then repeat the same steps for the “yugabyte-consumer” universe.
Note that this time, “yugabyte-producer” will be set up to consume data from “yugabyte-consumer”.
-## Load data into producer universe
+## 4. Load data into producer universe
1. Download the YugabyteDB workload generator JAR file (`yb-sample-apps.jar`) from [GitHub](https://github.com/yugabyte/yb-sample-apps).
@@ -93,7 +102,7 @@ java -jar yb-sample-apps.jar --workload CassandraBatchKeyValue --nodes 127.0.0.1
For bidirectional replication, repeat this step in the "yugabyte-consumer" universe.
-## Verify replication
+## 5. Verify replication
**For unidirectional replication**
diff --git a/docs/content/latest/deploy/multi-dc/3dc-deployment.md b/docs/content/latest/deploy/multi-dc/3dc-deployment.md
new file mode 100644
index 000000000000..575e0f17d01a
--- /dev/null
+++ b/docs/content/latest/deploy/multi-dc/3dc-deployment.md
@@ -0,0 +1,141 @@
+---
+title: Three+ data center (3DC)
+linkTitle: Three+ data center (3DC)
+description: Three or more (3DC) deployments
+menu:
+ latest:
+ parent: multi-dc
+ identifier: 3dc-deployment
+ weight: 632
+type: page
+isTocNested: true
+showAsideToc: true
+---
+
+{{< tip title="Recommended Reading" >}}
+
+[9 Techniques to Build Cloud-Native, Geo-Distributed SQL Apps with Low Latency](https://blog.yugabyte.com/9-techniques-to-build-cloud-native-geo-distributed-sql-apps-with-low-latency/) highlights the various multi-DC deployment strategies (including 3DC deployments) for a distributed SQL database like YugabyteDB.
+
+{{< /tip >}}
+
+Three data center (3DC) deployments of YugabyteDB are a natural extension of the three availability zone (AZ) deployments documented in the [Manual deployment](../../../deploy/manual-deployment/) section. An equal number of nodes is placed in each of the three data centers. Inside a single data center, a multi-AZ deployment is recommended to ensure resilience against zone failures.
+
+## Example scenario
+
+- Create a 3-node cluster with replication factor `3`.
+  - The cloud is `aws` and the 3 regions/AZs are `us-west`/`us-west-2a`, `us-east-1`/`us-east-1a`, and `ap-northeast-1`/`ap-northeast-1a`. One node is placed in each region/AZ in such a way that one replica of each tablet also gets placed in each region/AZ.
+ - Private IP addresses of the 3 nodes are `172.151.17.130`, `172.151.17.220`, and `172.151.17.140`.
+- Multiple data drives mounted on `/home/centos/disk1`, `/home/centos/disk2`.
+
+## Prerequisites
+
+Follow the [Checklist](../../../deploy/checklist/) to ensure you have prepared the nodes for installing YugabyteDB.
+
+Execute the following steps on each of the instances.
+
+## 1. Install software
+
+Follow the [installation instructions](../../../deploy/manual-deployment/install-software) to install YugabyteDB on each of the nodes.
+
+## 2. Start YB-Masters
+
+Run the `yb-master` server on each of the nodes as shown below. Note how multiple directories can be provided to the `--fs_data_dirs` option. Replace the `rpc_bind_addresses` value with the private IP address of the host, and set the `placement_cloud`, `placement_region`, and `placement_zone` values appropriately.
+
+```sh
+$ ./bin/yb-master \
+ --master_addresses 172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100 \
+ --rpc_bind_addresses 172.151.17.130 \
+ --fs_data_dirs "/home/centos/disk1,/home/centos/disk2" \
+ --placement_cloud aws \
+ --placement_region us-west \
+ --placement_zone us-west-2a \
+ --leader_failure_max_missed_heartbeat_periods 10 \
+ >& /home/centos/disk1/yb-master.out &
+```
+
+Note that we also set the `leader_failure_max_missed_heartbeat_periods` flag to 10. This flag specifies the maximum number of heartbeat periods that the leader can fail to heartbeat before it is considered failed. Because the data is geo-replicated across data centers, RPC latencies are expected to be higher, so we use this flag to increase the failure-detection interval in such a deployment. Note that the total failure timeout is now 5 seconds, since it is computed by multiplying `raft_heartbeat_interval_ms` (default of 500 ms) by `leader_failure_max_missed_heartbeat_periods` (current value of 10).
+
+For the full list of configuration flags, see the [YB-Master reference](../../../reference/configuration/yb-master/).
+
+## 3. Start YB-TServers
+
+Run the `yb-tserver` server on each node as shown below. Note that all of the master addresses have to be provided using the `--tserver_master_addrs` option. Replace the `rpc_bind_addresses` value with the private IP address of the host, and set the `placement_cloud`, `placement_region`, and `placement_zone` values appropriately.
+
+```sh
+$ ./bin/yb-tserver \
+ --tserver_master_addrs 172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100 \
+ --rpc_bind_addresses 172.151.17.130 \
+ --start_pgsql_proxy \
+ --pgsql_proxy_bind_address 172.151.17.130:5433 \
+ --cql_proxy_bind_address 172.151.17.130:9042 \
+ --fs_data_dirs "/home/centos/disk1,/home/centos/disk2" \
+ --placement_cloud aws \
+ --placement_region us-west \
+ --placement_zone us-west-2a \
+ --leader_failure_max_missed_heartbeat_periods 10 \
+ >& /home/centos/disk1/yb-tserver.out &
+```
+
+Note that we also set the `leader_failure_max_missed_heartbeat_periods` flag to 10. This flag specifies the maximum number of heartbeat periods that the leader can fail to heartbeat before it is considered failed. Because the data is geo-replicated across data centers, RPC latencies are expected to be higher, so we use this flag to increase the failure-detection interval in such a deployment. Note that the total failure timeout is now 5 seconds, since it is computed by multiplying `raft_heartbeat_interval_ms` (default of 500 ms) by `leader_failure_max_missed_heartbeat_periods` (current value of 10).
+
+For the full list of configuration flags, see the [YB-TServer reference](../../../reference/configuration/yb-tserver/).
+
+## 4. Set replica placement policy
+
+The default replica placement policy when the cluster is first created is to treat all nodes as equal, irrespective of the `placement_*` configuration flags. However, for the current deployment, we want to explicitly place 1 replica of each tablet in each region/AZ. The following command sets a replication factor of 3 across `us-west`/`us-west-2a`, `us-east-1`/`us-east-1a`, and `ap-northeast-1`/`ap-northeast-1a`, leading to such a placement.
+
+On any host running the yb-master, run the following command.
+
+```sh
+$ ./bin/yb-admin \
+ --master_addresses 172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100 \
+ modify_placement_info \
+ aws.us-west.us-west-2a,aws.us-east-1.us-east-1a,aws.ap-northeast-1.ap-northeast-1a 3
+```
+
+Verify by running the following.
+
+```sh
+$ curl -s http://<yb-master-ip>:7000/cluster-config
+```
+
+And confirm that the output looks similar to what is shown below with `min_num_replicas` set to 1 for each AZ.
+
+```
+replication_info {
+ live_replicas {
+ num_replicas: 3
+ placement_blocks {
+ cloud_info {
+ placement_cloud: "aws"
+ placement_region: "us-west"
+ placement_zone: "us-west-2a"
+ }
+ min_num_replicas: 1
+ }
+ placement_blocks {
+ cloud_info {
+ placement_cloud: "aws"
+ placement_region: "us-east-1"
+ placement_zone: "us-east-1a"
+ }
+ min_num_replicas: 1
+ }
+ placement_blocks {
+ cloud_info {
+ placement_cloud: "aws"
+ placement_region: "ap-northeast-1"
+ placement_zone: "ap-northeast-1a"
+ }
+ min_num_replicas: 1
+ }
+ }
+}
+```
+
+## 5. Verify deployment
+
+Use the [ysqlsh](../../../../admin/ysqlsh/) (for YSQL API) or [cqlsh](../../../../admin/cqlsh/) (for YCQL API) shells to test connectivity to the cluster.
+
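+For example, a minimal connectivity check from any node might look like the following (using one of the sample IPs above; `ysqlsh` and `cqlsh` connect to the default ports 5433 and 9042, respectively):
+
+```sh
+# Connect to the YSQL API on one of the nodes
+$ ./bin/ysqlsh -h 172.151.17.130
+
+# Connect to the YCQL API on one of the nodes
+$ ./bin/cqlsh 172.151.17.130
+```
+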
diff --git a/docs/content/latest/deploy/multi-dc/_index.html b/docs/content/latest/deploy/multi-dc/_index.html
new file mode 100644
index 000000000000..dcac4f468a4d
--- /dev/null
+++ b/docs/content/latest/deploy/multi-dc/_index.html
@@ -0,0 +1,47 @@
+---
+title: Multi-DC deployments
+linkTitle: Multi-DC deployments
+description: Multi-DC deployments
+headcontent: Deploy YugabyteDB across multiple data centers (DC).
+image: /images/section_icons/explore/planet_scale.png
+menu:
+ latest:
+ identifier: multi-dc
+ parent: deploy
+ weight: 631
+---
+
+YugabyteDB is a geo-distributed SQL database that can be easily deployed across multiple DCs or cloud regions. There are two primary configurations for such multi-DC deployments.
+
+The first configuration uses a single cluster stretched across 3 or more data centers, with data automatically sharded across all of them. This configuration is the default for Spanner-inspired databases like YugabyteDB. Data replication across data centers is synchronous and is based on the Raft consensus protocol. This means writes are globally consistent and reads are either globally consistent or timeline-consistent (when application clients use follower reads). Additionally, resilience against data center failures is fully automatic. However, this configuration has the potential to incur Wide Area Network (WAN) latency in the write path if the data centers are geographically far apart and are connected through the shared/unreliable Internet.
+
+For users not requiring global consistency and automatic resilience to data center failures, the above WAN latency can be eliminated altogether through the second configuration, where two independent single-DC clusters are connected through asynchronous replication based on Change Data Capture.
+
+<a href="https://blog.yugabyte.com/9-techniques-to-build-cloud-native-geo-distributed-sql-apps-with-low-latency/">9 Techniques to Build Cloud-Native, Geo-Distributed SQL Apps with Low Latency</a> highlights the various multi-DC deployment strategies for a distributed SQL database like YugabyteDB. Note that YugabyteDB is the only Spanner-inspired distributed SQL database to support a 2DC deployment.
+
+
+
diff --git a/docs/content/latest/deploy/public-clouds/aws/manual-deployment.md b/docs/content/latest/deploy/public-clouds/aws/manual-deployment.md
index 791d8cbe52dc..885456c7e7e2 100644
--- a/docs/content/latest/deploy/public-clouds/aws/manual-deployment.md
+++ b/docs/content/latest/deploy/public-clouds/aws/manual-deployment.md
@@ -313,7 +313,7 @@ for ip in $ALL_NODES; do \
done
```
-The advantage of using symbolic links (symlinks) is that, when you later need to do a rolling software upgrade, you can upgrade YB-Master and YB-TServer services one at a time by stopping the YB-Master service, switching the link to the new release, and starting the YB-Master service. Then, do the same for YB-TServer services.
+The advantage of using symbolic links (symlinks) is that, when you later need to do a rolling software upgrade, you can upgrade YB-Master and YB-TServer servers one at a time by stopping the YB-Master server, switching the link to the new release, and starting the YB-Master server. Then, do the same for YB-TServer servers.
## 3. Prepare YB-Master configuration files
@@ -402,7 +402,7 @@ done
)
```
-### Create configuration file for AZ2 YB-TServer services
+### Create configuration file for AZ2 YB-TServer servers
```sh
(CLOUD=aws; REGION=us-west; AZ=us-west-2b; CONFIG_FILE=~/yb-conf/tserver.conf; \
@@ -424,7 +424,7 @@ done
)
```
-### Create configuration file for AZ3 YB-TServer services
+### Create configuration file for AZ3 YB-TServer servers
```sh
(CLOUD=aws; REGION=us-west; AZ=us-west-2c; CONFIG_FILE=~/yb-conf/tserver.conf; \
@@ -457,9 +457,9 @@ for ip in $ALL_NODES; do \
done
```
-## 5. Start YB-Master services
+## 5. Start YB-Master servers
-Note: On the first time when all three YB-Master services are started, it creates the cluster. If a YB-Master service is restarted (after cluster has been created) such as during a rolling upgrade of software it simply rejoins the cluster.
+Note: The first time all three YB-Master servers are started, the cluster is created. If a YB-Master server is restarted after the cluster has been created, such as during a rolling software upgrade, it simply rejoins the cluster.
```sh
for ip in $MASTER_NODES; do \
@@ -472,7 +472,7 @@ done
### Verify
-Verify that the YB-Master services are running.
+Verify that the YB-Master servers are running.
```sh
for ip in $MASTER_NODES; do \
@@ -481,7 +481,7 @@ for ip in $MASTER_NODES; do \
done
```
-Check the YB-Master UI by going to any of the 3 YB-Master services.
+Check the YB-Master UI by going to any of the 3 YB-Master servers.
```
http://<yb-master-ip>:7000/
@@ -495,11 +495,11 @@ $ links http://:7000/
### Troubleshooting
-Make sure all the ports detailed in the earlier section are opened up. Else, check the log at `/mnt/d0/yb-master.out` for `stdout` or `stderr` output from the YB-Master service. Also, check INFO/WARNING/ERROR/FATAL glogs output by the process in the `/mnt/d0/yb-data/master/logs/*`
+Make sure all the ports detailed in the earlier section are opened up. Else, check the log at `/mnt/d0/yb-master.out` for `stdout` or `stderr` output from the YB-Master server. Also, check INFO/WARNING/ERROR/FATAL glogs output by the process in the `/mnt/d0/yb-data/master/logs/*`
-## 6. Start YB-TServer services
+## 6. Start YB-TServer servers
-After starting all the YB-Master services in the previous step, start YB-TServer services on all the nodes.
+After starting all the YB-Master servers in the previous step, start YB-TServer servers on all the nodes.
```sh
for ip in $ALL_NODES; do \
@@ -510,7 +510,7 @@ for ip in $ALL_NODES; do \
done
```
-Verify that the YB-TServer services are running.
+Verify that the YB-TServer servers are running.
```sh
for ip in $ALL_NODES; do \
@@ -523,9 +523,7 @@ done
Note: This example is a multi-AZ (single region deployment).
-The default replica placement policy when the cluster is first created is to treat all nodes as equal irrespective of the placement_* configuration flags.
-However, for the current deployment, we want to explicitly place 1 replica in each AZ.
-The following command sets replication factor of 3 across `us-west-2a`, `us-west-2b` and `us-west-2c` leading to the placement of 1 replica in each AZ.
+The default replica placement policy when the cluster is first created is to treat all nodes as equal irrespective of the placement_* configuration flags. However, for the current deployment, we want to explicitly place 1 replica in each AZ. The following command sets replication factor of 3 across `us-west-2a`, `us-west-2b` and `us-west-2c` leading to the placement of 1 replica in each AZ.
```sh
ssh -i $PEM $ADMIN_USER@$MASTER1 \
@@ -574,11 +572,7 @@ replication_info {
}
```
-Suppose your deployment is multi-region rather than multi-zone, one additional
-option to consider is to set a preferred location for all the tablet leaders
-using the [set_preferred_zones yb-admin command](../../../admin/yb-admin).
-For multi-row/multi-table transactional operations, colocating the leaders to be in a single zone/region can help reduce the number of
-cross-region network hops involved in executing the transaction and as a result improve performance.
+If your deployment is multi-region rather than multi-zone, one additional option to consider is to set a preferred location for all the tablet leaders using the [set_preferred_zones yb-admin command](../../../admin/yb-admin). For multi-row/multi-table transactional operations, colocating the leaders in a single zone/region can help reduce the number of cross-region network hops involved in executing the transaction and, as a result, improve performance.
The following command sets the preferred zone to `aws.us-west.us-west-2c`:
@@ -631,7 +625,7 @@ replication_info {
## 8. Test PostgreSQL-compatible YSQL API
Connect to the cluster using the `ysqlsh` utility that comes pre-bundled in the `bin` directory.
-If you need to try `ysqlsh` from a different node, you can download `ysqlsh` using instructions documented [here](../../../develop/tools/ysqlsh/).
+If you need to try `ysqlsh` from a different node, you can download `ysqlsh` using instructions documented [here](../../../admin/ysqlsh/).
From any node, execute the following command.
@@ -669,7 +663,7 @@ Output should be the following:
### Using cqlsh
-Connect to the cluster using the `cqlsh` utility that comes pre-bundled in the `bin` directory. If you need to try cqlsh from a different node, you can download cqlsh using instructions documented [here](../../../develop/tools/cqlsh/).
+Connect to the cluster using the `cqlsh` utility that comes pre-bundled in the `bin` directory. If you need to try cqlsh from a different node, you can download cqlsh using instructions documented [here](../../../admin/cqlsh/).
From any node, execute the following command.
diff --git a/docs/content/latest/develop/ecosystem-integrations/apache-kafka.md b/docs/content/latest/develop/ecosystem-integrations/apache-kafka.md
index b2def9b78dfc..8bdf33e0566c 100644
--- a/docs/content/latest/develop/ecosystem-integrations/apache-kafka.md
+++ b/docs/content/latest/develop/ecosystem-integrations/apache-kafka.md
@@ -13,7 +13,7 @@ isTocNested: true
showAsideToc: true
---
-In this tutorial, we are going to use the [Kafka Connect-based Sink Connector for YugabyteDB](https://github.com/yugabyte/yb-kafka-connector) to store events from Apache Kafka into YugabyteDB using YugabyteDB's [YCQL](../../../api/ycql) API.
+In this tutorial, we are going to use the [Kafka Connect-based Sink Connector for YugabyteDB](https://github.com/yugabyte/yb-kafka-connector) to store events from Apache Kafka into YugabyteDB using the [YCQL](../../../api/ycql) API.
## 1. Start local cluster
diff --git a/docs/content/latest/explore/binary/auto-sharding.md b/docs/content/latest/explore/binary/auto-sharding.md
index a04650a43531..2f7aa1e4c1c3 100644
--- a/docs/content/latest/explore/binary/auto-sharding.md
+++ b/docs/content/latest/explore/binary/auto-sharding.md
@@ -19,7 +19,7 @@ $ ./bin/yb-ctl --rf 1 --num_shards_per_tserver 4 create \
--tserver_flags "memstore_size_mb=1"
```
-This example creates a universe with one node. Now, let's add two more nodes to make this a 3-node, rf=1 universe. We need to pass the memstore size flag to each of the added YB-TServer services. You can do that by running the following:
+This example creates a universe with one node. Now, let's add two more nodes to make this a 3-node, rf=1 universe. We need to pass the memstore size flag to each of the added YB-TServer servers. You can do that by running the following:
```sh
$ ./bin/yb-ctl add_node --tserver_flags "memstore_size_mb=1"
@@ -29,7 +29,7 @@ $ ./bin/yb-ctl add_node --tserver_flags "memstore_size_mb=1"
$ ./bin/yb-ctl add_node --tserver_flags "memstore_size_mb=1"
```
-We can check the status of the cluster to confirm that we have three YB-TServer services.
+We can check the status of the cluster to confirm that we have three YB-TServer servers.
```sh
$ ./bin/yb-ctl status
diff --git a/docs/content/latest/explore/binary/two-data-centers.md b/docs/content/latest/explore/binary/two-data-centers.md
index d69d7953f30e..bca14427b801 100644
--- a/docs/content/latest/explore/binary/two-data-centers.md
+++ b/docs/content/latest/explore/binary/two-data-centers.md
@@ -100,7 +100,7 @@ To configure "Data Center - West" to be the consumer of data changes from the "D
yb-admin -master_addresses <consumer-master-addresses> setup_universe_replication <producer-universe-uuid> <producer-master-addresses> <producer-table-ids>
```
-- *consumer-master-addresses*: a comma-separated list of the YB-Master services. For this simulation, you have one YB-Master service for each cluster (typically, there are three).
+- *consumer-master-addresses*: a comma-separated list of the YB-Master servers. For this simulation, you have one YB-Master server for each cluster (typically, there are three).
- *producer-universe-uuid*: a unique identifier for the producer cluster. The UUID can be found in the YB-Master UI (`:7000`).
- *producer-table-ids*: A comma-separated list of `table_id` values (the generated UUIDs can be found in the YB-Master UI (`:7000`).
diff --git a/docs/content/latest/faq/compatibility.md b/docs/content/latest/faq/compatibility.md
index 4b17b7411fc9..99f80aa1cc28 100644
--- a/docs/content/latest/faq/compatibility.md
+++ b/docs/content/latest/faq/compatibility.md
@@ -25,13 +25,13 @@ The [YSQL](../../api/ysql) API is compatible with PostgreSQL. This means Postgre
- YugabyteDB's API compatibility is aimed at accelerating developer onboarding. By integrating well with the existing ecosystem, YugabyteDB ensures that developers can get started easily using a language they are already comfortable with.
-- YugabyteDB's API compatibility is not aimed at lift-and-shift porting of existing applications written for the original language. This is because existing applications are not written to take advantage of the distributed SQL APIs provided by YugabyteDB. For such existing applications, developers should expect to modify their previously monolithic PostgreSQL and/or non-transactional Cassandra data access logic as they look to migrate to YugabyteDB.
+- YugabyteDB's API compatibility is not aimed at lift-and-shift porting of existing applications written for the original language. This is because existing applications are not written to take advantage of the distributed, strongly-consistent storage architecture that YugabyteDB provides. For such existing applications, developers should expect to modify their previously monolithic PostgreSQL and/or non-transactional Cassandra data access logic as they look to migrate to YugabyteDB.
## YSQL compatibility with PostgreSQL
### What is the extent of compatibility with PostgreSQL?
-As highlighted in [Distributed PostgreSQL on a Google Spanner Architecture – Query Layer](https://blog.yugabyte.com/distributed-postgresql-on-a-google-spanner-architecture-query-layer/), YSQL reuses open source PostgreSQL’s query layer (written in C) as much as possible and as a result is wire-compatible with PostgreSQL dialect and client drivers. Specifically, YSQL v1.2 is based on PostgreSQL v11.2. Following are some of the currently supported features:
+As highlighted in [Distributed PostgreSQL on a Google Spanner Architecture – Query Layer](https://blog.yugabyte.com/distributed-postgresql-on-a-google-spanner-architecture-query-layer/), YSQL reuses open source PostgreSQL’s query layer (written in C) as much as possible and as a result is wire-compatible with PostgreSQL dialect and client drivers. Specifically, YSQL is based on PostgreSQL v11.2. Following are some of the currently supported features:
- DDL statements: CREATE, DROP and TRUNCATE tables
- Data types: All primitive types including numeric types (integers and floats), text data types, byte arrays, date-time types, UUID, SERIAL, as well as JSONB
diff --git a/docs/content/latest/faq/product.md b/docs/content/latest/faq/product.md
index 52af14a994c7..cb491e163ecf 100644
--- a/docs/content/latest/faq/product.md
+++ b/docs/content/latest/faq/product.md
@@ -44,7 +44,7 @@ The next major release is the v2.1 release in Winter 2020.
## Can I deploy YugabyteDB to production?
-Yes, both YugabyteDB APIs are production ready. [YCQL](https://blog.yugabyte.com/yugabyte-db-1-0-a-peek-under-the-hood/) achieved this status starting with v1.0 in May 2018. [YSQL](https://blog.yugabyte.com/announcing-yugabyte-db-2-0-ga:-jepsen-tested,-high-performance-distributed-sql/) achieved this status starting v2.0 in September 2019.
+Yes, both YugabyteDB APIs are production ready. [YCQL](https://blog.yugabyte.com/yugabyte-db-1-0-a-peek-under-the-hood/) achieved this status starting with v1.0 in May 2018 while [YSQL](https://blog.yugabyte.com/announcing-yugabyte-db-2-0-ga:-jepsen-tested,-high-performance-distributed-sql/) became production ready starting v2.0 in September 2019.
## Which companies are currently using YugabyteDB in production?
@@ -78,16 +78,102 @@ Details for both the above benchhmarks are published in [Building a Strongly Con
Starting with [v1.3](https://blog.yugabyte.com/announcing-yugabyte-db-v1-3-with-enterprise-features-as-open-source/), YugabyteDB is 100% open source. It is licensed under Apache 2.0 and the source is available on [GitHub](https://github.com/yugabyte/yugabyte-db).
-## How does YugabyteDB, Yugabyte Platform and Yugabyte Cloud differ from each other?
+## How do YugabyteDB, Yugabyte Platform and Yugabyte Cloud differ from each other?
-[YugabyteDB](../../quick-start/) is the best choice for the startup organizations with strong technical operations expertise looking to deploy YugabyteDB into production with traditional DevOps tools.
+[YugabyteDB](../../quick-start/) is the 100% open source core database. It is the best choice for startup organizations with strong technical operations expertise looking to deploy to production with traditional DevOps tools.
-[Yugabyte Platform](../../deploy/enterprise-edition/) is commercial software for running a self-managed DB-as-a-Service. It has built-in cloud native operations, enterprise-grade deployment options and world-class support. It is the simplest way to run YugabyteDB in mission-critical production environments with one or more regions (across both public cloud and on-premise data centers).
+[Yugabyte Platform](../../deploy/enterprise-edition/) is commercial software for running a self-managed YugabyteDB-as-a-Service. It has built-in cloud native operations, enterprise-grade deployment options and world-class support. It is the simplest way to run YugabyteDB in mission-critical production environments with one or more regions (across both public cloud and on-premise data centers).
[Yugabyte Cloud](http://yugabyte.com/cloud) is Yugabyte's fully-managed cloud service on AWS and GCP. You can [sign up](https://www.yugabyte.com/cloud/) for early access now.
A more detailed comparison of the above is available [here](https://www.yugabyte.com/platform/#compare-editions).
+## What are the trade-offs involved in using YugabyteDB?
+
+Trade-offs depend on the type of database used as baseline for comparison.
+
+### Distributed SQL
+
+Examples: Amazon Aurora, Google Cloud Spanner, CockroachDB, TiDB
+
+**Benefits of YugabyteDB**
+
+- Low-latency reads and high-throughput writes.
+- Cloud-neutral deployments with a Kubernetes-native database.
+- 100% Apache 2.0 open source even for enterprise features.
+
+**Trade-offs**
+
+- None
+
+Learn more: [What is Distributed SQL?](https://blog.yugabyte.com/what-is-distributed-sql/)
+
+### Monolithic SQL
+
+Examples: PostgreSQL, MySQL, Oracle, Amazon Aurora.
+
+**Benefits of YugabyteDB**
+
+- Scale write throughput linearly across multiple nodes and/or geographic regions.
+- Automatic failover and native repair.
+- 100% Apache 2.0 open source even for enterprise features.
+
+**Trade-offs**
+
+- Transactions and JOINs can now span multiple nodes, thereby increasing latency.
+
+Learn more: [Distributed PostgreSQL on a Google Spanner Architecture – Query Layer](https://blog.yugabyte.com/distributed-postgresql-on-a-google-spanner-architecture-query-layer/)
+
+### Traditional NewSQL
+
+Examples: Vitess, Citus
+
+**Benefits of YugabyteDB**
+
+- Distributed transactions across any number of nodes.
+- No single point of failure given all nodes are equal.
+- 100% Apache 2.0 open source even for enterprise features.
+
+**Trade-offs**
+
+- None
+
+Learn more: [Rise of Globally Distributed SQL Databases – Redefining Transactional Stores for Cloud Native Era](https://blog.yugabyte.com/rise-of-globally-distributed-sql-databases-redefining-transactional-stores-for-cloud-native-era/)
+
+### Transactional NoSQL
+
+Examples: MongoDB, Amazon DynamoDB, FoundationDB, Azure Cosmos DB.
+
+**Benefits of YugabyteDB**
+
+- Flexibility of SQL as query needs change in response to business changes.
+- Distributed transactions across any number of nodes.
+- Low latency, strongly consistent reads given that read-time quorum is avoided altogether.
+- 100% Apache 2.0 open source even for enterprise features.
+
+**Trade-offs**
+
+- None
+
+Learn more: [Why are NoSQL Databases Becoming Transactional?](https://blog.yugabyte.com/nosql-databases-becoming-transactional-mongodb-dynamodb-faunadb-cosmosdb/)
+
+### Eventually Consistent NoSQL
+
+Examples: Apache Cassandra, Couchbase.
+
+**Benefits of YugabyteDB**
+
+- Flexibility of SQL as query needs change in response to business changes.
+- Strongly consistent, zero data loss writes.
+- Strongly consistent as well as timeline-consistent reads without resorting to eventual consistency-related penalties such as read repairs and anti-entropy.
+- 100% Apache 2.0 open source even for enterprise features.
+
+**Trade-offs**
+
+- Extremely short unavailability during the leader election time for all shard leaders lost during a node failure or network partition.
+
+Learn more: [Apache Cassandra: The Truth Behind Tunable Consistency, Lightweight Transactions & Secondary Indexes](https://blog.yugabyte.com/apache-cassandra-lightweight-transactions-secondary-indexes-tunable-consistency/)
+
## How does YugabyteDB compare to other SQL and NoSQL databases?
See [YugabyteDB in Comparison](../../comparisons/)
diff --git a/docs/content/latest/introduction.md b/docs/content/latest/introduction.md
index 843e6a44463e..9af140c217d4 100644
--- a/docs/content/latest/introduction.md
+++ b/docs/content/latest/introduction.md
@@ -95,6 +95,22 @@ The YugabyteDB APIs are isolated and independent from one another today. This me
Trade-offs depend on the type of database used as baseline for comparison.
+### Distributed SQL
+
+Examples: Amazon Aurora, Google Cloud Spanner, CockroachDB, TiDB
+
+**Benefits of YugabyteDB**
+
+- Low-latency reads and high-throughput writes.
+- Cloud-neutral deployments with a Kubernetes-native database.
+- 100% Apache 2.0 open source even for enterprise features.
+
+**Trade-offs**
+
+- None
+
+Learn more: [What is Distributed SQL?](https://blog.yugabyte.com/what-is-distributed-sql/)
+
### Monolithic SQL
Examples: PostgreSQL, MySQL, Oracle, Amazon Aurora.
@@ -103,6 +119,7 @@ Examples: PostgreSQL, MySQL, Oracle, Amazon Aurora.
- Scale write throughput linearly across multiple nodes and/or geographic regions.
- Automatic failover and native repair.
+- 100% Apache 2.0 open source even for enterprise features.
**Trade-offs**
@@ -118,6 +135,7 @@ Examples: Vitess, Citus
- Distributed transactions across any number of nodes.
- No single point of failure given all nodes are equal.
+- 100% Apache 2.0 open source even for enterprise features.
**Trade-offs**
@@ -134,6 +152,7 @@ Examples: MongoDB, Amazon DynamoDB, FoundationDB, Azure Cosmos DB.
- Flexibility of SQL as query needs change in response to business changes.
- Distributed transactions across any number of nodes.
- Low latency, strongly consistent reads given that read-time quorum is avoided altogether.
+- 100% Apache 2.0 open source even for enterprise features.
**Trade-offs**
@@ -150,6 +169,7 @@ Examples: Apache Cassandra, Couchbase.
- Flexibility of SQL as query needs change in response to business changes.
- Strongly consistent, zero data loss writes.
- Strongly consistent as well as timeline-consistent reads without resorting to eventual consistency-related penalties such as read repairs and anti-entropy.
+- 100% Apache 2.0 open source even for enterprise features.
**Trade-offs**
diff --git a/docs/content/latest/quick-start/kubernetes/create-local-cluster.md b/docs/content/latest/quick-start/kubernetes/create-local-cluster.md
index c875ebd8820c..ff15dc11fca4 100644
--- a/docs/content/latest/quick-start/kubernetes/create-local-cluster.md
+++ b/docs/content/latest/quick-start/kubernetes/create-local-cluster.md
@@ -80,6 +80,6 @@ The **Masters** section highlights the YB-Master service along its corresponding
### 4.2 TServer status
-Click **See all nodes** to go to the **Tablet Servers** page where we can observe the one YB-TServer along with the time since it last connected to the YB-Master using regular heartbeats. Additionally, you can see that the **Load (Num Tablets)** is balanced across all available YB-TServer (tserver) services. These tablets are the shards of the user tables currently managed by the cluster (which in this case is the `system_redis.redis` table). As new tables get added, new tablets will get automatically created and distributed evenly across all the available YB-TServer services.
+Click **See all nodes** to go to the **Tablet Servers** page where we can observe the one YB-TServer along with the time since it last connected to the YB-Master using regular heartbeats. Additionally, you can see that the **Load (Num Tablets)** is balanced across all available YB-TServers. These tablets are the shards of the user tables currently managed by the cluster (which in this case is the `system_redis.redis` table). As new tables get added, new tablets will get automatically created and distributed evenly across all the available YB-TServers.
![tserver-list](/images/admin/master-tservers-list-kubernetes-rf1.png)
diff --git a/docs/content/latest/reference/configuration/yb-master.md b/docs/content/latest/reference/configuration/yb-master.md
index 448e5186ce04..f2e4018d9d03 100644
--- a/docs/content/latest/reference/configuration/yb-master.md
+++ b/docs/content/latest/reference/configuration/yb-master.md
@@ -13,7 +13,7 @@ isTocNested: 3
showAsideToc: true
---
-Use the `yb-master` binary and its options to configure the [YB-Master](../../../architecture/concepts/yb-master) service. The `yb-master` executable file is located in the `bin` directory of YugabyteDB home.
+Use the `yb-master` binary and its options to configure the [YB-Master](../../../architecture/concepts/yb-master) server. The `yb-master` executable file is located in the `bin` directory of YugabyteDB home.
## Syntax
@@ -67,7 +67,7 @@ Specifies a comma-separated list of all RPC addresses for `yb-master` consensus-
{{< note title="Note" >}}
-The number of comma-separated values should match the total number of YB-Master service (or the replication factor).
+The number of comma-separated values should match the total number of YB-Master servers (or the replication factor).
{{< /note >}}
@@ -227,7 +227,7 @@ Default: Server automatically picks a valid default internally, typically 8.
#### --max_clock_skew_usec
-The expected maximum clock skew, in microseconds (µs), between any two services in your deployment.
+The expected maximum clock skew, in microseconds (µs), between any two servers in your deployment.
Default: `50000` (50,000 µs = 50ms)
@@ -247,7 +247,7 @@ Settings related to managing geo-distributed clusters and Raft consensus.
The maximum heartbeat periods that the leader can fail to heartbeat in before the leader is considered to be failed. The total failure timeout, in milliseconds, is [`--raft_heartbeat_interval_ms`](#raft-heartbeat-interval-ms) multiplied by `--leader_failure_max_missed_heartbeat_periods`.
-For read replica clusters, set the value to `10` on both YB-Master and YB-TServer services. Because the the data is globally replicated, RPC latencies are higher. Use this flag to increase the failure detection interval in such a higher RPC latency deployment.
+For read replica clusters, set the value to `10` on both YB-Master and YB-TServer servers. Because the data is globally replicated, RPC latencies are higher. Use this flag to increase the failure detection interval in such a higher RPC latency deployment.
Default: `6`
@@ -307,7 +307,7 @@ Default: `false`
#### --use_node_to_node_encryption
-Enable server-server, or node-to-node, encryption between YugabyteDB YB-Master and YB-TServer services in a cluster or universe. To work properly, all YB-Master services must also have their [`--use_node_to_node_encryption`](../yb-master/#use-node-to-node-encryption) setting enabled. When enabled, then [`--allow_insecure_connections`](#allow-insecure-connections) must be disabled.
+Enable server-server, or node-to-node, encryption between YugabyteDB YB-Master and YB-TServer servers in a cluster or universe. To work properly, all YB-Master servers must also have their [`--use_node_to_node_encryption`](../yb-master/#use-node-to-node-encryption) setting enabled. When enabled, then [`--allow_insecure_connections`](#allow-insecure-connections) must be disabled.
Default: `false`
@@ -337,7 +337,7 @@ The Admin UI for yb-master is available at http://localhost:7000.
### Home
-Home page of the YB-Master service that gives a high level overview of the cluster. Note all YB-Master services in a cluster show identical information.
+Home page of the YB-Master server that gives a high-level overview of the cluster. Note that all YB-Master servers in a cluster show identical information.
![master-home](/images/admin/master-home-binary-with-tables.png)
@@ -349,7 +349,7 @@ List of tables present in the cluster.
### Tablet servers
-List of all nodes (aka YB-TServer services) present in the cluster.
+List of all nodes (aka YB-TServer servers) present in the cluster.
![master-tservers](/images/admin/master-tservers-list-binary-with-tablets.png)
diff --git a/docs/content/latest/reference/configuration/yb-tserver.md b/docs/content/latest/reference/configuration/yb-tserver.md
index c77a69aed269..72b1121246ee 100644
--- a/docs/content/latest/reference/configuration/yb-tserver.md
+++ b/docs/content/latest/reference/configuration/yb-tserver.md
@@ -13,7 +13,7 @@ isTocNested: 3
showAsideToc: true
---
-Use the `yb-tserver` binary and its options to configure the [YB-TServer](../../../architecture/concepts/yb-tserver/) service. The `yb-tserver` executable file is located in the `bin` directory of YugabyteDB home.
+Use the `yb-tserver` binary and its options to configure the [YB-TServer](../../../architecture/concepts/yb-tserver/) server. The `yb-tserver` executable file is located in the `bin` directory of YugabyteDB home.
## Syntax
@@ -83,7 +83,7 @@ Comma-separated list of all the `yb-master` RPC addresses. Mandatory.
{{< note title="Note" >}}
-The number of comma-separated values should match the total number of YB-Master services (or the replication factor).
+The number of comma-separated values should match the total number of YB-Master servers (or the replication factor).
{{< /note >}}
@@ -197,7 +197,7 @@ Settings related to managing geo-distributed clusters and Raft consensus.
The maximum heartbeat periods that the leader can fail to heartbeat in before the leader is considered to be failed. The total failure timeout, in milliseconds (ms), is [`--raft_heartbeat_interval_ms`](#raft-heartbeat-interval-ms) multiplied by `--leader_failure_max_missed_heartbeat_periods`.
-For read replica clusters, set the value to `10` on both YB-Master and YB-TServer services. Because the the data is globally replicated, RPC latencies are higher. Use this flag to increase the failure detection interval in such a higher RPC latency deployment.
+For read replica clusters, set the value to `10` on both YB-Master and YB-TServer servers. Because the data is globally replicated, RPC latencies are higher; use this flag to increase the failure detection interval in such higher-latency deployments.
Default: `6`
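+The same value should be set on every `yb-tserver` in such a deployment, for example (a sketch, passed alongside the server's other flags):
+
+```sh
+--leader_failure_max_missed_heartbeat_periods 10
+```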
@@ -463,7 +463,7 @@ Default: `false`
#### --use_node_to_node_encryption
-Enable server-server, or node-to-node, encryption between YugabyteDB YB-Master and YB-TServer services in a cluster or universe. To work properly, all YB-Master services must also have their [`--use_node_to_node_encryption`](../yb-master/#use-node-to-node-encryption) setting enabled. When enabled, then [`--allow_insecure_connections`](#allow-insecure-connections) must be disabled.
+Enable server-server, or node-to-node, encryption between YugabyteDB YB-Master and YB-TServer servers in a cluster or universe. To work properly, all YB-Master servers must also have their [`--use_node_to_node_encryption`](../yb-master/#use-node-to-node-encryption) setting enabled. When enabled, [`--allow_insecure_connections`](#allow-insecure-connections) must be disabled.
Default: `false`
diff --git a/docs/content/latest/secure/tls-encryption/server-to-server.md b/docs/content/latest/secure/tls-encryption/server-to-server.md
index b6e3f9ad5b64..c5b4abddbc5e 100644
--- a/docs/content/latest/secure/tls-encryption/server-to-server.md
+++ b/docs/content/latest/secure/tls-encryption/server-to-server.md
@@ -15,7 +15,7 @@ isTocNested: true
showAsideToc: true
---
-To enable server-server (or node-to-node) encryption, start the YB-Master and YB-TServer services using the appropriate configuration options described here.
+To enable server-server (or node-to-node) encryption, start the YB-Master and YB-TServer servers using the appropriate configuration options described here.
Configuration option | Service | Description |
-------------------------------|--------------------------|------------------------------|
@@ -23,7 +23,7 @@ Configuration option | Service | Description
`allow_insecure_connections` | YB-Master only | Optional, defaults to `true`. Set to `false` to disallow any process with unencrypted communication from joining this cluster. Default value is `true`. Note that this flag requires the `use_node_to_node_encryption` to be enabled. |
`certs_dir` | YB-Master, YB-TServer | Optional. This directory should contain the configuration that was prepared in the a step for this node to perform encrypted communication with the other nodes. Default value for YB-Masters is `/yb-data/master/data/certs` and for YB-TServers this location is `/yb-data/tserver/data/certs` |
-## Start the master process
+## Start the YB-Master server
You can enable access control by starting the `yb-master` processes minimally with the `--use_node_to_node_encryption=true` configuration option as described above. Your command should look similar to that shown below:
@@ -38,9 +38,9 @@ bin/yb-master \
You can read more about bringing up the YB-Masters for a deployment in the section on [manual deployment of a YugabyteDB cluster](../../../deploy/manual-deployment/start-masters/).
-## Start the YB-TServer service
+## Start the YB-TServer server
-You can enable access control by starting the `yb-tserver` service minimally with the `--use_node_to_node_encryption=true` flag as described above. Your command should look similar to that shown below:
+You can enable access control by starting the `yb-tserver` server minimally with the `--use_node_to_node_encryption=true` flag as described above. Your command should look similar to that shown below:
```
bin/yb-tserver \
diff --git a/docs/content/latest/troubleshoot/nodes/check-logs.md b/docs/content/latest/troubleshoot/nodes/check-logs.md
index ce819d0bfb3e..fbb86b241419 100644
--- a/docs/content/latest/troubleshoot/nodes/check-logs.md
+++ b/docs/content/latest/troubleshoot/nodes/check-logs.md
@@ -23,8 +23,7 @@ In the sections below, the YugabyteDB `yugabyte-data` directory is represented b
## YB-Master logs
-YB-Master services manage system meta-data, such as namespaces (databases or keyspaces), tables, and types: they handle DDL statements (for example, `CREATE TABLE`, `DROP TABLE`, `ALTER TABLE` KEYSPACE/TYPE`). YB-Master services also manage users, permissions, and coordinate background operations, such as load balancing.
-Master logs can be found at:
+The YB-Master service manages system metadata, such as namespaces (databases or keyspaces) and tables. It handles DDL statements such as `CREATE TABLE`, `DROP TABLE`, and `ALTER TABLE`/`ALTER KEYSPACE`/`ALTER TYPE`, manages users and permissions, and coordinates background operations, such as load balancing. Its logs can be found at:
```sh
$ cd /disk1/yb-data/master/logs/
@@ -34,8 +33,7 @@ Logs are organized by error severity: `FATAL`, `ERROR`, `WARNING`, `INFO`. In ca
## YB-TServer logs
-YB-TServer services perform the actual I/O for end-user requests: they handle DML statements (for example, `INSERT`, `UPDATE`, `DELETE`, and `SELECT`) and Redis commands.
-YB-TServer logs can be found at:
+The YB-TServer service performs the actual I/O for end-user requests. It handles DML statements such as `INSERT`, `UPDATE`, `DELETE`, and `SELECT`. Its logs can be found at:
```sh
$ cd /disk1/yb-data/tserver/logs/
diff --git a/docs/layouts/partials/footer.html b/docs/layouts/partials/footer.html
index f6b31a5c6bb9..63f3ce0e4669 100644
--- a/docs/layouts/partials/footer.html
+++ b/docs/layouts/partials/footer.html
@@ -37,7 +37,7 @@
Privacy Policy -->
- Copyright © 2017-2019 Yugabyte, Inc. All rights reserved.
+ Copyright © 2017-2020 Yugabyte, Inc. All rights reserved.