Remove /bin from toolkit commands (pingcap#8473)
Oreoxmt authored May 6, 2022
1 parent e25b4bb commit 8058039
Showing 9 changed files with 35 additions and 33 deletions.
4 changes: 3 additions & 1 deletion deploy-monitoring-services.md
@@ -182,8 +182,10 @@ url = https://grafana.net

Start the Grafana service:

{{< copyable "shell-regular" >}}

```bash
$ ./bin/grafana-server \
./bin/grafana-server \
--config="./conf/grafana.ini" &
```
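
To confirm that the Grafana service came up, a quick check such as the following can help. The port and the `/api/health` endpoint are Grafana defaults and are assumptions here, not something introduced by this change:

```bash
# Assumes Grafana's default HTTP port (3000) and its /api/health endpoint;
# adjust if conf/grafana.ini changes them.
curl -s http://127.0.0.1:3000/api/health
```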

8 changes: 4 additions & 4 deletions dm/deploy-a-dm-cluster-using-binary.md
@@ -66,7 +66,7 @@ You can configure DM-master by using [command-line parameters](#dm-master-comman
The following is the description of DM-master command-line parameters:

```bash
./bin/dm-master --help
./dm-master --help
```

```
@@ -131,7 +131,7 @@ The following is the configuration file of DM-master. It is recommended that you
{{< copyable "shell-regular" >}}

```bash
./bin/dm-master -config conf/dm-master1.toml
./dm-master -config conf/dm-master1.toml
```
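
The `conf/dm-master1.toml` file itself is not shown in this diff. As a minimal sketch only (the `name` and `master-addr` fields follow the DM-master configuration format; the values are assumptions), it could be created like this:

```bash
# Minimal sketch of a DM-master configuration; adjust the name and address
# for your environment before starting dm-master.
cat > conf/dm-master1.toml <<'EOF'
name = "master1"
master-addr = ":8261"
EOF
```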

> **Note:**
@@ -151,7 +151,7 @@ The following is the description of the DM-worker command-line parameters:
{{< copyable "shell-regular" >}}

```bash
./bin/dm-worker --help
./dm-worker --help
```

```
@@ -207,7 +207,7 @@ The following is the DM-worker configuration file. It is recommended that you co
{{< copyable "shell-regular" >}}

```bash
./bin/dm-worker -config conf/dm-worker1.toml
./dm-worker -config conf/dm-worker1.toml
```

3. For DM-worker2, change `name` in the configuration file to `worker2`. Then repeat Step 2.
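
Similarly, `conf/dm-worker1.toml` is not included in this diff. A minimal sketch covering steps 2 and 3 above (field names follow the DM-worker configuration format; the addresses are assumptions):

```bash
# Minimal sketch of a DM-worker configuration; for DM-worker2, change `name`
# to "worker2" and use a different worker-addr.
cat > conf/dm-worker1.toml <<'EOF'
name = "worker1"
worker-addr = ":8262"
join = "127.0.0.1:8261"
EOF
```
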
6 changes: 3 additions & 3 deletions dm/quick-start-create-task.md
@@ -100,7 +100,7 @@ For safety reasons, it is recommended to configure and use encrypted passwords.
{{< copyable "shell-regular" >}}

```bash
./bin/dmctl encrypt "123456"
./dmctl encrypt "123456"
```

```
@@ -137,7 +137,7 @@ To load the data source configurations of MySQL1 into the DM cluster using dmctl
{{< copyable "shell-regular" >}}

```bash
./bin/dmctl --master-addr=127.0.0.1:8261 operate-source create conf/source1.yaml
./dmctl --master-addr=127.0.0.1:8261 operate-source create conf/source1.yaml
```

For MySQL2, replace the configuration file in the above command with that of MySQL2.
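
The `conf/source1.yaml` referenced above is not part of this diff either. A rough sketch of its shape (field names follow the DM source configuration format; every value, including the placeholder password, is an assumption):

```bash
# Rough sketch of a DM data source configuration for MySQL1.
cat > conf/source1.yaml <<'EOF'
source-id: "mysql-replica-01"
from:
  host: "127.0.0.1"
  port: 3306
  user: "root"
  # Put the output of `dmctl encrypt` here instead of a plaintext password.
  password: "<encrypted-password>"
EOF
```
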
@@ -198,7 +198,7 @@ Now, suppose that you need to migrate these sharded tables to the `db_target.t_t
{{< copyable "shell-regular" >}}

```bash
./bin/dmctl --master-addr 127.0.0.1:8261 start-task conf/task.yaml
./dmctl --master-addr 127.0.0.1:8261 start-task conf/task.yaml
```
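
The `conf/task.yaml` passed to `start-task` is also not shown. A minimal sketch of the kind of task configuration the command expects (field names follow the DM task configuration format; all values are assumptions, and a real sharded-table scenario would additionally define `routes` rules, omitted here):

```bash
# Minimal sketch of a DM task configuration.
cat > conf/task.yaml <<'EOF'
name: "test"
task-mode: all
target-database:
  host: "127.0.0.1"
  port: 4000
  user: "root"
  password: ""
mysql-instances:
  - source-id: "mysql-replica-01"
EOF
```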

```
2 changes: 1 addition & 1 deletion get-started-with-tidb-lightning.md
@@ -28,7 +28,7 @@ First, use [`dumpling`](/dumpling-overview.md) to export data from MySQL:
{{< copyable "shell-regular" >}}

```sh
./bin/dumpling -h 127.0.0.1 -P 3306 -u root -t 16 -F 256MB -B test -f 'test.t[12]' -o /data/my_database/
./dumpling -h 127.0.0.1 -P 3306 -u root -t 16 -F 256MB -B test -f 'test.t[12]' -o /data/my_database/
```

In the above command:
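
The flag-by-flag explanation that follows in the original file is collapsed in this diff. As a quick reference only, here is the same command with the commonly documented meaning of each flag noted in comments (per the Dumpling documentation; double-check against your version):

```sh
# -h / -P / -u : host, port, and user of the upstream MySQL
# -t           : number of export threads
# -F           : maximum size of a single exported file
# -B           : database to export
# -f           : table filter; 'test.t[12]' matches test.t1 and test.t2
# -o           : output directory for the exported data
./dumpling -h 127.0.0.1 -P 3306 -u root -t 16 -F 256MB -B test -f 'test.t[12]' -o /data/my_database/
```
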
2 changes: 1 addition & 1 deletion sync-diff-inspector/sync-diff-inspector-overview.md
@@ -160,7 +160,7 @@ Run the following command:
{{< copyable "shell-regular" >}}

```bash
./bin/sync_diff_inspector --config=./config.toml
./sync_diff_inspector --config=./config.toml
```

This command outputs a check report `summary.txt` in the `output-dir` specified in `config.toml`, as well as the log `sync_diff.log`. In the `output-dir`, a folder named after the hash value of the `config.toml` file is also generated. This folder includes the checkpoint node information of breakpoints and the SQL file generated when the data is inconsistent.
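
For illustration, the outputs can then be inspected like this. The `./output` path is an assumption; use whatever `output-dir` you set in `config.toml`:

```bash
# Assumes output-dir = "./output" in config.toml.
cat ./output/summary.txt   # overall check result
ls ./output/               # includes a folder named after the hash of config.toml
```
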
8 changes: 4 additions & 4 deletions tidb-binlog/deploy-tidb-binlog.md
@@ -78,7 +78,7 @@ The following part shows how to use Pump and Drainer based on the nodes above.

1. Deploy Pump using the binary.

- To view the command line parameters of Pump, execute `./bin/pump -help`:
- To view the command line parameters of Pump, execute `./pump -help`:

```bash
Usage of Pump:
@@ -173,14 +173,14 @@ The following part shows how to use Pump and Drainer based on the nodes above.
{{< copyable "shell-regular" >}}

```bash
./bin/pump -config pump.toml
./pump -config pump.toml
```

If a parameter is set both on the command line and in the configuration file, the value from the command line is used.

2. Deploy Drainer using the binary.

- To view the command line parameters of Drainer, execute `./bin/drainer -help`:
- To view the command line parameters of Drainer, execute `./drainer -help`:

```bash
Usage of Drainer:
@@ -391,7 +391,7 @@ The following part shows how to use Pump and Drainer based on the nodes above.
{{< copyable "shell-regular" >}}

```bash
./bin/drainer -config drainer.toml -initial-commit-ts {initial-commit-ts}
./drainer -config drainer.toml -initial-commit-ts {initial-commit-ts}
```

If a parameter is set both on the command line and in the configuration file, the value from the command line is used.
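
Neither `pump.toml` nor `drainer.toml` appears in this diff. As a very rough sketch of the former (field names follow the TiDB Binlog documentation; the addresses, directory, and retention value are assumptions):

```bash
# Rough sketch of a Pump configuration.
cat > pump.toml <<'EOF'
# Address Pump listens on and the PD endpoints it registers with.
addr = "127.0.0.1:8250"
pd-urls = "http://127.0.0.1:2379"
# Where binlog data is stored, and how many days to retain it.
data-dir = "data.pump"
gc = 7
EOF
```
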
34 changes: 17 additions & 17 deletions tidb-binlog/get-started-with-tidb-binlog.md
@@ -131,7 +131,7 @@ Start all the services using:
```bash
./bin/pd-server --config=pd.toml &>pd.out &
./bin/tikv-server --config=tikv.toml &>tikv.out &
./bin/pump --config=pump.toml &>pump.out &
./pump --config=pump.toml &>pump.out &
sleep 3
./bin/tidb-server --config=tidb.toml &>tidb.out &
```
@@ -143,7 +143,7 @@ Expected output:
[1] 20935
[kolbe@localhost tidb-latest-linux-amd64]$ ./bin/tikv-server --config=tikv.toml &>tikv.out &
[2] 20944
[kolbe@localhost tidb-latest-linux-amd64]$ ./bin/pump --config=pump.toml &>pump.out &
[kolbe@localhost tidb-latest-linux-amd64]$ ./pump --config=pump.toml &>pump.out &
[3] 21050
[kolbe@localhost tidb-latest-linux-amd64]$ sleep 3
[kolbe@localhost tidb-latest-linux-amd64]$ ./bin/tidb-server --config=tidb.toml &>tidb.out &
@@ -156,7 +156,7 @@ If you execute `jobs`, you should see a list of running daemons:
[kolbe@localhost tidb-latest-linux-amd64]$ jobs
[1] Running ./bin/pd-server --config=pd.toml &>pd.out &
[2] Running ./bin/tikv-server --config=tikv.toml &>tikv.out &
[3]- Running ./bin/pump --config=pump.toml &>pump.out &
[3]- Running ./pump --config=pump.toml &>pump.out &
[4]+ Running ./bin/tidb-server --config=tidb.toml &>tidb.out &
```
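
At this point you can also confirm that TiDB is accepting SQL connections. Port 4000 is TiDB's default and, like the use of the `mysql` client here, is an assumption rather than part of this change:

```bash
# Requires a MySQL-compatible client; 4000 is TiDB's default SQL port.
mysql -h 127.0.0.1 -P 4000 -u root -e 'SELECT tidb_version()\G'
```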

@@ -191,7 +191,7 @@ Start `drainer` using:

```bash
sudo systemctl start mariadb
./bin/drainer --config=drainer.toml &>drainer.out &
./drainer --config=drainer.toml &>drainer.out &
```

If you are using an operating system that makes it easier to install MySQL server, that's also OK. Just make sure it's listening on port 3306 and that you can either connect to it as user "root" with an empty password, or adjust drainer.toml as necessary.
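
The `drainer.toml` used here is not included in this diff. A rough sketch of a MySQL-downstream configuration (field names follow the TiDB Binlog documentation; the addresses and the empty-password root account are assumptions matching the setup described above):

```bash
# Rough sketch of a Drainer configuration replicating to the local MySQL/MariaDB.
cat > drainer.toml <<'EOF'
addr = "127.0.0.1:8249"
pd-urls = "http://127.0.0.1:2379"
data-dir = "data.drainer"

[syncer]
db-type = "mysql"

# Downstream database started above (root with an empty password).
[syncer.to]
host = "127.0.0.1"
user = "root"
password = ""
port = 3306
EOF
```
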
@@ -331,32 +331,32 @@ Information about Pumps and Drainers that have joined the cluster is stored in P
Use `binlogctl` to get a view of the current status of Pumps and Drainers in the cluster:

```bash
./bin/binlogctl -cmd drainers
./bin/binlogctl -cmd pumps
./binlogctl -cmd drainers
./binlogctl -cmd pumps
```

Expected output:

```
[kolbe@localhost tidb-latest-linux-amd64]$ ./bin/binlogctl -cmd drainers
[kolbe@localhost tidb-latest-linux-amd64]$ ./binlogctl -cmd drainers
[2019/04/11 17:44:10.861 -04:00] [INFO] [nodes.go:47] ["query node"] [type=drainer] [node="{NodeID: localhost.localdomain:8249, Addr: 192.168.236.128:8249, State: online, MaxCommitTS: 407638907719778305, UpdateTime: 2019-04-11 17:44:10 -0400 EDT}"]
[kolbe@localhost tidb-latest-linux-amd64]$ ./bin/binlogctl -cmd pumps
[kolbe@localhost tidb-latest-linux-amd64]$ ./binlogctl -cmd pumps
[2019/04/11 17:44:13.904 -04:00] [INFO] [nodes.go:47] ["query node"] [type=pump] [node="{NodeID: localhost.localdomain:8250, Addr: 192.168.236.128:8250, State: online, MaxCommitTS: 407638914024079361, UpdateTime: 2019-04-11 17:44:13 -0400 EDT}"]
```

If you kill a Drainer, the cluster puts it in the "paused" state, which means that the cluster expects it to rejoin:

```bash
pkill drainer
./bin/binlogctl -cmd drainers
./binlogctl -cmd drainers
```

Expected output:

```
[kolbe@localhost tidb-latest-linux-amd64]$ pkill drainer
[kolbe@localhost tidb-latest-linux-amd64]$ ./bin/binlogctl -cmd drainers
[kolbe@localhost tidb-latest-linux-amd64]$ ./binlogctl -cmd drainers
[2019/04/11 17:44:22.640 -04:00] [INFO] [nodes.go:47] ["query node"] [type=drainer] [node="{NodeID: localhost.localdomain:8249, Addr: 192.168.236.128:8249, State: paused, MaxCommitTS: 407638915597467649, UpdateTime: 2019-04-11 17:44:18 -0400 EDT}"]
```

Expand All @@ -369,15 +369,15 @@ There are 3 solutions to this issue:
- Stop Drainer using `binlogctl` instead of killing the process:

```
./bin/binlogctl --pd-urls=http://127.0.0.1:2379 --cmd=drainers
./bin/binlogctl --pd-urls=http://127.0.0.1:2379 --cmd=offline-drainer --node-id=localhost.localdomain:8249
./binlogctl --pd-urls=http://127.0.0.1:2379 --cmd=drainers
./binlogctl --pd-urls=http://127.0.0.1:2379 --cmd=offline-drainer --node-id=localhost.localdomain:8249
```
- Start Drainer _before_ starting Pump.
- Use `binlogctl` after starting PD (but before starting Drainer and Pump) to update the state of the paused Drainer:
```
./bin/binlogctl --pd-urls=http://127.0.0.1:2379 --cmd=update-drainer --node-id=localhost.localdomain:8249 --state=offline
./binlogctl --pd-urls=http://127.0.0.1:2379 --cmd=update-drainer --node-id=localhost.localdomain:8249 --state=offline
```
## Cleanup
@@ -393,8 +393,8 @@ Expected output:
```
[kolbe@localhost tidb-latest-linux-amd64]$ for p in tidb-server drainer pump tikv-server pd-server; do pkill "$p"; sleep 1; done
[4]- Done ./bin/tidb-server --config=tidb.toml &>tidb.out
[5]+ Done ./bin/drainer --config=drainer.toml &>drainer.out
[3]+ Done ./bin/pump --config=pump.toml &>pump.out
[5]+ Done ./drainer --config=drainer.toml &>drainer.out
[3]+ Done ./pump --config=pump.toml &>pump.out
[2]+ Done ./bin/tikv-server --config=tikv.toml &>tikv.out
[1]+ Done ./bin/pd-server --config=pd.toml &>pd.out
```
@@ -404,9 +404,9 @@ If you wish to restart the cluster after all services exit, use the same command
```bash
./bin/pd-server --config=pd.toml &>pd.out &
./bin/tikv-server --config=tikv.toml &>tikv.out &
./bin/drainer --config=drainer.toml &>drainer.out &
./drainer --config=drainer.toml &>drainer.out &
sleep 3
./bin/pump --config=pump.toml &>pump.out &
./pump --config=pump.toml &>pump.out &
sleep 3
./bin/tidb-server --config=tidb.toml &>tidb.out &
```
2 changes: 1 addition & 1 deletion tidb-binlog/tidb-binlog-reparo.md
@@ -117,7 +117,7 @@ password = ""
### Start example

```
./bin/reparo -config reparo.toml
./reparo -config reparo.toml
```

> **Note:**
2 changes: 1 addition & 1 deletion tidb-lightning/deploy-tidb-lightning.md
@@ -41,7 +41,7 @@ With the default replica count of 3, this means the total free space should be a
Use the [`dumpling` tool](/dumpling-overview.md) to export data from MySQL by using the following command:

```sh
./bin/dumpling -h 127.0.0.1 -P 3306 -u root -t 16 -F 256MB -B test -f 'test.t[12]' -o /data/my_database/
./dumpling -h 127.0.0.1 -P 3306 -u root -t 16 -F 256MB -B test -f 'test.t[12]' -o /data/my_database/
```

In this command,
