This repository has been archived by the owner on Nov 15, 2023. It is now read-only.
I have a backup server running Polkadot and Kusama instances as systemd services under the user polkadot. I opted to create a second Kusama instance on this server, amending the listen and RPC ports as well as changing the base-path to /home/polkadot/.local/share/polkadotb. The instance was started as a systemd service running as the same user as the other instances (polkadot).
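For reference, the second instance was configured along these lines. This is a minimal sketch, not the exact unit used: the unit name, binary path, node name, and port numbers are illustrative assumptions; only the user and base-path come from the report above.

```ini
# /etc/systemd/system/kusama-b.service  (hypothetical unit name)
[Unit]
Description=Second Kusama instance
After=network-online.target

[Service]
# Same user as the existing Polkadot/Kusama services
User=polkadot
# Distinct base-path plus non-default listen/RPC ports so this
# instance does not collide with the services already running
ExecStart=/usr/local/bin/polkadot \
  --chain kusama \
  --base-path /home/polkadot/.local/share/polkadotb \
  --port 30334 \
  --rpc-port 9934
Restart=on-failure

[Install]
WantedBy=multi-user.target
```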
When the new Kusama instance was almost synced I received the following error:
Jun 30 17:20:55 2021-06-30 17:20:55 ✨ Imported #8142963 (0x5d46…f729)
Jun 30 17:20:56 2021-06-30 17:20:56 Subsystem approval-voting-subsystem appears unresponsive.
Jun 30 17:20:56 2021-06-30 17:20:56 error receiving message from subsystem context for job job="CandidateBackingJob" err=Context("Signal channel is terminated and empty.")
Jun 30 17:20:56 2021-06-30 17:20:56 err=Subsystem(Context("Signal channel is terminated and empty."))
Jun 30 17:20:56 2021-06-30 17:20:56 subsystem exited with error subsystem="candidate-validation-subsystem" err=FromOrigin { origin: "candidate-validation", source: Context("Signal channel is terminated and empty.") }
Jun 30 17:20:56 2021-06-30 17:20:56 subsystem exited with error subsystem="chain-api-subsystem" err=FromOrigin { origin: "chain-api", source: Context("Signal channel is terminated and empty.") }
Jun 30 17:20:56 2021-06-30 17:20:56 subsystem exited with error subsystem="statement-distribution-subsystem" err=FromOrigin { origin: "statement-distribution", source: SubsystemReceive(Context("Signal channel is terminated and empty.")) }
Jun 30 17:20:56 2021-06-30 17:20:56 subsystem exited with error subsystem="availability-recovery-subsystem" err=FromOrigin { origin: "availability-recovery", source: Context("Signal channel is terminated and empty.") }
Jun 30 17:20:56 2021-06-30 17:20:56 error receiving message from subsystem context for job job="ProvisioningJob" err=Context("Signal channel is terminated and empty.")
Jun 30 17:20:56 2021-06-30 17:20:56 error receiving message from subsystem context for job job="BitfieldSigningJob" err=Context("Signal channel is terminated and empty.")
Jun 30 17:20:56 2021-06-30 17:20:56 subsystem exited with error subsystem="availability-distribution-subsystem" err=FromOrigin { origin: "availability-distribution", source: IncomingMessageChannel(Context("Signal channel is terminated and empty.")) }
Jun 30 17:20:56 2021-06-30 17:20:56 error receiving message from subsystem context: Context("Signal channel is terminated and empty.") err=Context("Signal channel is terminated and empty.")
Jun 30 17:20:56 2021-06-30 17:20:56 Essential task `overseer` failed. Shutting down service.
Jun 30 17:20:56 2021-06-30 17:20:56 Shutting down Network Bridge due to error err=Context("Signal channel is terminated and empty.")
Jun 30 17:20:56 2021-06-30 17:20:56 subsystem exited with error subsystem="network-bridge-subsystem" err=FromOrigin { origin: "network-bridge", source: Context("Received SubsystemError from overseer: Context(\"Signal channel is terminated and empty.\")") }
Jun 30 17:20:56 2021-06-30 17:20:56 subsystem exited with error subsystem="runtime-api-subsystem" err=Context("Signal channel is terminated and empty.")
Each subsequent restart of the service produced a similar error, and the instance would not stay online (judging by telemetry).
As a workaround I created a new user polkadotb, amended the systemd service to run as this user, and removed the base-path flag. Additionally, I restored this instance from a database snapshot instead of syncing from genesis. The instance crashed on the first sync but has been operating stably since.
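The workaround amounts to the following change to the unit, sketched under the same illustrative assumptions as above. With the base-path flag removed, the node falls back to its default data directory under the new user's home (e.g. /home/polkadotb/.local/share/polkadot):

```ini
[Service]
# Dedicated user instead of sharing `polkadot` with the other instances;
# dropping --base-path lets the node use its default data directory
# under this user's home
User=polkadotb
ExecStart=/usr/local/bin/polkadot \
  --chain kusama \
  --port 30334 \
  --rpc-port 9934
```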
Regards,
Paradox