diff --git a/docs/01_nodeos/03_plugins/state_history_plugin/10_how-to-fast-start-without-old-history.md b/docs/01_nodeos/03_plugins/state_history_plugin/10_how-to-fast-start-without-old-history.md
index 52ba3906700..4e1f5344830 100644
--- a/docs/01_nodeos/03_plugins/state_history_plugin/10_how-to-fast-start-without-old-history.md
+++ b/docs/01_nodeos/03_plugins/state_history_plugin/10_how-to-fast-start-without-old-history.md
@@ -24,7 +24,7 @@ This procedure records the current chain state and future history, without previ
 
 4. Look for `Placing initial state in block n` in the log, where n is the start block number.
 
-5. Start a filler with `--fpg-create` (if PostgreSQL), `--fill-skip-to n`, and `--fill-trim`. Replace `n` with the value above.
+5. If using a database filler, start the filler with `--fpg-create` (if PostgreSQL), `--fill-skip-to n`, and `--fill-trim`. Replace `n` with the value above.
 
 6. Do not stop `nodeos` until it has received at least 1 block from the network, or it won't be able to restart.
 
@@ -36,7 +36,7 @@ If `nodeos` fails to receive blocks from the network, then try the above using `
 | Either use a firewall to block access to your `http-server-address`, or change it to `localhost:8888` to disable remote access.
 
 [[info]]
-| Whenever you run a filler after this point, use the `--fill-trim` option. Only use `--fpg-create` and `--fill-skip-to` the first time.
+| If you run a database filler after this point, use the `--fill-trim` option when restarting. Only use `--fpg-create` and `--fill-skip-to` the first time.
 
 [[info]]
-| On large chains, this procedure creates a delta record that is too large for javascript processes to handle. 64-bit C++ processes can handle the large record. `fill-pg` and `fill-lmdb` break up the large record into smaller records when filling databases.
+| On large chains, this procedure creates a delta record that is too large for JavaScript processes to handle. 64-bit C++ processes can handle the large record. If using a database filler, `fill-pg` and `fill-lmdb` break up the large record into smaller records when filling databases.
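
For the step 5 change above, a minimal sketch of the two filler invocations being described, assuming `fill-pg` as the database filler, a placeholder start block of `12345678` standing in for the `n` reported by the `Placing initial state in block n` log line, and connection settings (state-history endpoint, PostgreSQL credentials) left at the filler's defaults:

```sh
# First run only: create the PostgreSQL schema, skip history before the
# recorded start block, and trim. 12345678 is a placeholder for the block
# number n reported by "Placing initial state in block n" in the nodeos log.
fill-pg --fpg-create --fill-skip-to 12345678 --fill-trim

# Any later restart: keep --fill-trim; omit --fpg-create and --fill-skip-to.
fill-pg --fill-trim
```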