Oldest Unacknowledged Message > 2 days #73
Comments
I don't believe the cause of this is anything the client itself is doing. @kir-titievsky @jonparrott any ideas?
@kir-titievsky just a friendly ping
Please submit a support case with the subscription ID, if you have the option. Otherwise, please send a note to cloud-pubsub@google.com with the same information. This does not look like a client library issue at first glance.
Given the above, I'll close this for now. We can reopen and investigate client-side fixes if the Pub/Sub team's review turns anything up.
I'm experiencing a similar problem in 0.18.0 that I don't understand. The Undelivered Messages count in Stackdriver continues to grow even though the subscriber is ack()ing messages. It's a single-publisher, single-subscriber configuration. But pulling the messages shows 0 items.
I tried deleting the subscription as a last resort, but the behavior repeats. Should I open a new issue?
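For reference, a zero-item spot check like the one described above can also be done programmatically with a unary pull, which bypasses the streaming client entirely. This is a minimal sketch against a recent @google-cloud/pubsub release; the project and subscription names are placeholders:

```js
// Unary Pull spot check: fetch up to 10 messages without the streaming
// client. Messages are not acked here, so they will be redelivered once
// the ack deadline expires. Names below are placeholders.
const {v1} = require('@google-cloud/pubsub');

async function peekBacklog() {
  const client = new v1.SubscriberClient();
  const subscription = client.subscriptionPath('my-project', 'my-sub');
  const [response] = await client.pull({
    subscription,
    maxMessages: 10,
    returnImmediately: true, // return right away if the backlog looks empty
  });
  console.log(`pulled ${response.receivedMessages.length} message(s)`);
}

peekBacklog().catch(console.error);
```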
Yes, please. Knowing the project, subscription, and timing details is critical to debug something like this.
Any updates on this? We are having this issue at my company as well.
@mau21mau This is the type of issue that can have many different causes. To investigate, it is usually necessary to have more detailed information about the project, topic, and subscription. Please create a Cloud support request from the Cloud Console at https://console.cloud.google.com/support
Environment details
Steps to reproduce
I'm running a single Node.js subscriber which reads messages from one Pub/Sub topic in streaming pull mode at a rate of ~500 messages/s. Messages are usually acked in under a second. The subscriber is roughly shaped like the sketch below.
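This is a sketch against a recent @google-cloud/pubsub release, not the actual code; the subscription name and flow-control limit are illustrative:

```js
// Streaming-pull subscriber: the library opens a StreamingPull connection,
// delivers messages via the 'message' event, and the handler acks each one.
const {PubSub} = require('@google-cloud/pubsub');

const subscription = new PubSub().subscription('my-sub', {
  flowControl: {maxMessages: 1000}, // cap on outstanding (unacked) messages
});

subscription.on('message', message => {
  // Application-specific processing happens here (typically < 1 s) ...
  message.ack();
});

subscription.on('error', err => {
  console.error('subscriber error:', err);
});
```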
I'm reading from the same topic on a different subscription with a Dataflow-based subscriber, which still uses the non-streaming interface and does not show the issue described here, so I have a benchmark to compare my Node subscriber against.
The Node subscriber works quite reliably, except that, from time to time, a few messages (fewer than 10) seem to get stuck in Pub/Sub for a very long time, often longer than a day.
I've seen values as high as 24 h, 29 h, or even 54 h (and counting) on the "Oldest Unacknowledged Message" graph in Stackdriver.
Eventually those few messages do get consumed, but I have not found a reliable way of triggering that delivery; it just happens eventually. Restarting the pods that consume the subscription doesn't seem to help.
I'm not quite sure whether this is related to #11, because in my case restarting the pods has no effect.
Any idea where to start debugging?
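One cheap place to start might be instrumenting the handler to surface old or redelivered messages, so the stuck ones show up in the subscriber's own logs rather than only in the metrics. A sketch, with an arbitrary age threshold and an unbounded in-memory map that a real subscriber would want to prune:

```js
// Log any message that arrives long after publish or is delivered more
// than once; stuck messages should stand out here when they finally land.
const {PubSub} = require('@google-cloud/pubsub');

const subscription = new PubSub().subscription('my-sub');
const deliveries = new Map(); // messageId -> delivery count (unbounded!)

subscription.on('message', message => {
  const count = (deliveries.get(message.id) || 0) + 1;
  deliveries.set(message.id, count);

  const ageMs = Date.now() - message.publishTime.getTime();
  if (ageMs > 60 * 1000 || count > 1) {
    console.warn(
      `message ${message.id}: age ${Math.round(ageMs / 1000)}s, delivery #${count}`
    );
  }
  message.ack();
});
```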
Edit 2018-02-20:
The subscription shown in the second graph above has finally delivered/acked the stuck message, after 70 h.
Under normal operation the oldest unacked message is typically under 2 s old, until at some point a few messages get stuck.
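The graphs in question track the Stackdriver metric pubsub.googleapis.com/subscription/oldest_unacked_message_age, which can also be read programmatically when correlating incidents with subscriber logs. A sketch using @google-cloud/monitoring; the project and subscription IDs are placeholders:

```js
// Query the last 10 minutes of oldest_unacked_message_age for a subscription.
const monitoring = require('@google-cloud/monitoring');

async function oldestUnackedAge(projectId, subscriptionId) {
  const client = new monitoring.MetricServiceClient();
  const nowSec = Math.floor(Date.now() / 1000);
  const [series] = await client.listTimeSeries({
    name: client.projectPath(projectId),
    filter:
      'metric.type="pubsub.googleapis.com/subscription/oldest_unacked_message_age"' +
      ` AND resource.label.subscription_id="${subscriptionId}"`,
    interval: {
      startTime: {seconds: nowSec - 600},
      endTime: {seconds: nowSec},
    },
  });
  for (const ts of series) {
    for (const point of ts.points) {
      // Value is the age, in seconds, of the oldest unacked message.
      console.log(`${point.interval.endTime.seconds}: ${point.value.int64Value}s`);
    }
  }
}

oldestUnackedAge('my-project', 'my-sub').catch(console.error);
```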