endless loop while calling DefaultBroadcaster.removeAtmosphereResource() #1492
Comments
Can you paste the entire stack trace? For sure this is not a bug in Atmosphere :-) I'm closing the bug, but if you can add the complete dump I can take a look.
Updated stack. Out of curiosity: why would you synchronize on DefaultBroadcaster.resources while it is already thread-safe?
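For context, the pattern being questioned looks roughly like the sketch below. This is a hypothetical illustration, not Atmosphere's actual code: the class and method names are invented, and it only shows a queue field named like DefaultBroadcaster.resources wrapped in an external synchronized block. The queue is already safe for individual operations, so the extra lock usually only matters when a compound check-then-act sequence has to be atomic.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch only -- not Atmosphere source. It illustrates external
// locking around an already thread-safe queue, the pattern questioned above.
class BroadcasterLikeSketch {
    private final ConcurrentLinkedQueue<Object> resources = new ConcurrentLinkedQueue<>();

    void removeResource(Object r) {
        // The lock adds nothing for a single remove(); it is only useful if the
        // contains() + remove() pair must appear atomic to other code that
        // synchronizes on the same monitor.
        synchronized (resources) {
            if (resources.contains(r)) {
                resources.remove(r);
            }
        }
    }
}
```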
Where is the updated stack? I'm seeing your comments, but not the stack.
I mean the complete stack trace :-)
How can it be more "complete" when it starts with java.lang.Thread.run(Thread.java:744)?
I need the full thread dump of the VM, not only one thread. With a single thread I can't help.
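For anyone following along: a full dump means stack traces for every thread in the JVM, not just the spinning one. Externally this is usually `jstack -l <pid>` or `kill -3 <pid>` against the process; the sketch below is a programmatic equivalent using ThreadMXBean (the class name is made up).

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Prints a stack trace for every live thread in the current JVM,
// i.e. the kind of full dump requested above.
public class FullThreadDump {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            System.out.println("\"" + info.getThreadName() + "\" " + info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
            System.out.println();
        }
    }
}
```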
I don't think the thread is stuck since the state is runnable. You observe it under high load, right?
Yes, it's hard to reproduce, but it will happen eventually. Do you have a suggestion for another Queue implementation I could experiment with?
When it happens, is the thread marked as blocked? I don't think another queue will help. I don't know if you can update, but trying 1.0.18/19 should help, though it will require some work.
Blocking is not the problem here. I will try 1.0.18 then ;-)
I’m running Java HotSpot(TM) 64-Bit Server VM (24.45-b08 mixed mode), using Atmosphere 1.0.13. I know this is not the most recent version, but I see the same logic is still used on the Master branch.
I’m getting an endless loop while calling DefaultBroadcaster.removeAtmosphereResource(). Here is the full stack:
The stack above just seems to "hang" in the JVM, driving one core to 100% CPU. By "hanging" I mean it literally stays on that same stack frame all day; I have taken several stack dumps to confirm this.
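One way to confirm which thread owns the busy core, without external tools, is to sample per-thread CPU time from inside the JVM. The probe below is purely illustrative (the class name, sampling window, and threshold are invented) and assumes the platform supports per-thread CPU time.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Illustrative probe: samples per-thread CPU time twice and prints the
// threads that burned the most CPU in between, with their top stack frame.
public class HotThreadProbe {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        if (!bean.isThreadCpuTimeSupported()) {
            System.err.println("per-thread CPU time not supported here");
            return;
        }
        long[] ids = bean.getAllThreadIds();
        long[] before = new long[ids.length];
        for (int i = 0; i < ids.length; i++) {
            before[i] = bean.getThreadCpuTime(ids[i]);
        }
        Thread.sleep(5_000);                        // arbitrary sampling window
        for (int i = 0; i < ids.length; i++) {
            long now = bean.getThreadCpuTime(ids[i]);
            if (before[i] < 0 || now < 0) {
                continue;                           // thread died or not measurable
            }
            long delta = now - before[i];
            if (delta > 1_000_000_000L) {           // > 1s of CPU within the 5s window
                ThreadInfo info = bean.getThreadInfo(ids[i], 1);
                if (info != null) {
                    StackTraceElement[] stack = info.getStackTrace();
                    System.out.println(info.getThreadName() + " used "
                            + delta / 1_000_000 + " ms CPU, top frame: "
                            + (stack.length > 0 ? stack[0] : "n/a"));
                }
            }
        }
    }
}
```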
Of course, the problem may lie inside ConcurrentLinkedQueue itself. However, the queue is only about 1000 elements long, so I wouldn't consider that "big" in any way.
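To test whether the linear traversal inside ConcurrentLinkedQueue.remove() can account for the CPU burn at this size, a stand-alone stress test is one option. The harness below is purely illustrative (the class name, queue size, and thread counts are invented) and only roughly mimics concurrent add/remove traffic on the broadcaster's resources queue; it is not a reproduction of the Atmosphere code path.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Purely illustrative harness: concurrent add/poll/remove traffic on a
// ConcurrentLinkedQueue of roughly 1000 elements, loosely mimicking
// broadcast traffic racing removeAtmosphereResource(). Not Atmosphere code.
public class ClqRemoveStress {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<Long> queue = new ConcurrentLinkedQueue<>();
        AtomicBoolean stop = new AtomicBoolean(false);
        AtomicLong next = new AtomicLong();

        for (int i = 0; i < 1000; i++) {
            queue.add(next.getAndIncrement());
        }

        // Writer: adds a fresh element and drops the head, keeping ~1000 live.
        Thread writer = new Thread(() -> {
            while (!stop.get()) {
                queue.add(next.getAndIncrement());
                queue.poll();
            }
        });

        // Remover: removes arbitrary values; remove(Object) scans the whole
        // queue on a miss, which forces the linear traversal under test.
        Thread remover = new Thread(() -> {
            while (!stop.get()) {
                long candidate = ThreadLocalRandom.current().nextLong(next.get() + 1);
                queue.remove(candidate);
            }
        });

        writer.start();
        remover.start();
        Thread.sleep(30_000);   // take jstack dumps of this process while it runs
        stop.set(true);
        writer.join();
        remover.join();
        System.out.println("final size: " + queue.size());
    }
}
```

Taking a few thread dumps while the harness runs, and comparing the ConcurrentLinkedQueue frames against the production dump, would indicate whether removal traversal alone explains the sustained 100% CPU.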