This repository has been archived by the owner on Nov 15, 2023. It is now read-only.
Hello,
Polkadot binary downloaded from the Parity GitHub releases, version 0.9.1 (--wasm-execution Compiled is enabled).
We just realized during tests that the node fails every time we filter inbound tcp/30333.
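To be concrete, the filtering in our tests was simply dropping new inbound connections on the p2p port; a minimal sketch of such a rule (the exact tooling in our environment may differ):

```sh
# Illustrative rule only: drop inbound TCP on the default libp2p port (30333).
iptables -A INPUT -p tcp --dport 30333 -j DROP
```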
Logs below:
2021-05-17 08:56:00 ✨ Imported #7508239 (0x69c3…4e6e)
2021-05-17 08:56:01 Subsystem approval-voting-subsystem appears unresponsive.
2021-05-17 08:56:01 Essential task `overseer` failed. Shutting down service.
2021-05-17 08:56:01 error receiving message from subsystem context for job job="CandidateBackingJob" err=Context("No more messages in rx queue to process")
2021-05-17 08:56:01 err=Subsystem(Context("No more messages in rx queue to process"))
2021-05-17 08:56:01 error receiving message from subsystem context for job job="CandidateSelectionJob" err=Context("No more messages in rx queue to process")
2021-05-17 08:56:01 subsystem exited with error subsystem="chain-api-subsystem" err=FromOrigin { origin: "chain-api", source: Context("No more messages in rx queue to process") }
2021-05-17 08:56:01 subsystem exited with error subsystem="candidate-validation-subsystem" err=FromOrigin { origin: "candidate-validation", source: Context("No more messages in rx queue to process") }
2021-05-17 08:56:01 subsystem exited with error subsystem="availability-distribution-subsystem" err=FromOrigin { origin: "availability-distribution", source: IncomingMessageChannel(Context("No more messages in rx queue to process")) }
2021-05-17 08:56:01 subsystem exited with error subsystem="availability-recovery-subsystem" err=FromOrigin { origin: "availability-recovery", source: Context("No more messages in rx queue to process") }
2021-05-17 08:56:01 error receiving message from subsystem context for job job="BitfieldSigningJob" err=Context("No more messages in rx queue to process")
2021-05-17 08:56:01 error receiving message from subsystem context for job job="ProvisioningJob" err=Context("No more messages in rx queue to process")
2021-05-17 08:56:01 subsystem exited with error subsystem="statement-distribution-subsystem" err=FromOrigin { origin: "statement-distribution", source: SubsystemReceive(Context("No more messages in rx queue to process")) }
2021-05-17 08:56:01 Shutting down Network Bridge due to error err=Context("No more messages in rx queue to process")
2021-05-17 08:56:01 subsystem exited with error subsystem="network-bridge-subsystem" err=FromOrigin { origin: "network-bridge", source: Context("Received SubsystemError from overseer: Context(\"No more messages in rx queue to process\")") }
2021-05-17 08:56:01 subsystem exited with error subsystem="runtime-api-subsystem" err=Context("No more messages in rx queue to process")
Error:
0: Other: Essential task failed.
Location:
src/main.rs:25
Can you by chance provide some preceding logs?
Do you have Prometheus set up with a dashboard, so we can see some additional metrics to trace this issue to its source?
Logs with parachain=trace might help: https://wiki.polkadot.network/docs/en/build-node-management
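For example, a minimal sketch of how trace logging for the parachain targets could be enabled on the command line (the other flags are placeholders for whatever the node is normally started with):

```sh
# Illustrative invocation: add trace-level logs for the parachain targets.
polkadot --validator --wasm-execution Compiled -lparachain=trace
```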
Also, Prometheus datapoints from 5 minutes before it happens up until the node stops, for:
- parachain_overseer_signals_received
- parachain_overseer_signals_sent
- parachain_subsystem_unbounded_received
- parachain_subsystem_unbounded_sent
- parachain_subsystem_bounded_received
- parachain_subsystem_bounded_sent
- (maybe parachain_messages_relayed_total)

preferably on a per-subsystem level, as a screenshot of Grafana or whatever visualization you are going to use.
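If Prometheus and Grafana are not wired up yet, the raw counters can also be sampled directly from the node's metrics endpoint; a minimal sketch, assuming the default metrics port 9615 on localhost:

```sh
# Sample the overseer/subsystem counters from the node's Prometheus endpoint.
# 9615 is the default metrics port; adjust if --prometheus-port was changed.
curl -s http://127.0.0.1:9615/metrics \
  | grep -E 'parachain_(overseer_signals|subsystem_(un)?bounded)_(received|sent)'
```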
I am aware that this is quite a bit, but that should be sufficient to pinpoint it quickly.