A panic in the worker leaves it unable to process any subsequent requests for a period of time #166
Comments
Hmm, this is certainly a problem we should fix. After a quick look, it seems that re-creating the entire wasm instance and re-setting the JS glue isn't really possible with […]. Moving away from […]
@zebp thanks so much for taking a look at this! This has been one of the most significant issues we've hit with Workers while building our product. Do you know whether a fix for this issue is on the team's near-term roadmap? Thanks again.
Thankfully, this seems to have been fixed when we updated […]. I have a production worker with a […]
I'm still seeing this issue. After a queue event leading to the following exception, all executions after it no longer log anything:

I'm using at least the following dependencies:

```toml
chrono = { version = "0.4", features = ["wasmbind", "serde"] }
console_error_panic_hook = { version = "0.1" }
getrandom = { version = "0.2", features = ["js"] }
worker = { version = "0.0.18", features = ["queue"] }
```

According to my Cargo.lock:

```
name = "wasm-bindgen"
version = "0.2.86"

name = "wasm-bindgen-backend"
version = "0.2.86"

name = "wasm-bindgen-futures"
version = "0.4.36"

name = "wasm-bindgen-macro"
version = "0.2.86"

name = "wasm-bindgen-macro-support"
version = "0.2.86"

name = "wasm-bindgen-shared"
version = "0.2.86"
```

Any thoughts on why my worker still becomes non-functional? It led me to file issue #374, but after instrumenting the worker along the lines of https://blog.cloudflare.com/wasm-coredumps/ it exhibits the behaviour described in this issue.
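One way to at least capture the panic message before logging goes silent is a custom panic hook; this is essentially what the `console_error_panic_hook` crate does, forwarding the message to `console.error`. A native sketch using `std::panic::set_hook` (the message text is illustrative):

```rust
use std::panic;

fn main() {
    // Record every panic message before the instance dies. In a worker,
    // console_error_panic_hook forwards the same information to
    // console.error instead of stderr.
    panic::set_hook(Box::new(|info| {
        eprintln!("worker panicked: {info}");
    }));

    // Trigger a panic and contain it (possible natively, not on wasm,
    // where the hook still runs but the instance cannot continue).
    let result = panic::catch_unwind(|| panic!("queue handler failed"));
    assert!(result.is_err());
    println!("hook ran");
}
```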
When a Rust-based worker panics, it leaves the worker dangling and unable to process subsequent requests from the same client (client A), which manifests as consistent 500s returned by Cloudflare. Requests from another client (client B) work fine, which suggests that the sticky routing mechanism routes client A's requests to a dangling, non-functional instance of the worker.
I created a minimal test case in this repository with the exact steps to reproduce.
Here's the relevant worker exhibiting this behaviour.
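Until the underlying issue is fixed, the usual mitigation is to avoid panicking at all in handler code: propagate errors with `Result` and `?` so a bad input yields an error response rather than a panic that poisons the wasm instance. A minimal sketch with a hypothetical `parse_limit` helper (not from the linked repository):

```rust
// Hypothetical parsing step: propagate errors instead of unwrapping,
// so a bad input produces an Err (which a worker can map to an HTTP
// error response) rather than a panic.
fn parse_limit(query: Option<&str>) -> Result<u32, String> {
    let raw = query.ok_or("missing `limit` parameter")?;
    raw.parse::<u32>()
        .map_err(|e| format!("bad `limit`: {e}"))
}

fn main() {
    assert_eq!(parse_limit(Some("10")), Ok(10));
    assert!(parse_limit(None).is_err());
    assert!(parse_limit(Some("abc")).is_err());
    println!("ok");
}
```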
I am not sure if this is a bug in workers-rs – perhaps not – but hopefully it can be routed to the appropriate team. Thank you!