[BUG] Microsoft.Azure.ServiceBus - System.InvalidOperationException: Can't create session when the connection is closing #13637
Comments
Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @jfggdl.
@mr-davidc Thanks for reaching out. You mentioned you have two environments and they use Microsoft.Azure.ServiceBus v3.4.0 and Microsoft.Azure.ServiceBus v4.1.3 respectively. We had a fix for this in a more recent version. Is it convenient to use Microsoft.Azure.ServiceBus v4.1.3 in both environments and test it out?
Hi @DorothySun216, I just double-checked and yes, I can confirm the exceptions I am currently seeing are being thrown from within v4.1.3 of Microsoft.Azure.ServiceBus in our Sandbox environment. The most recent set of exceptions recorded by Sentry was 4 days ago on 30-07. I have attached a screenshot of the Sentry page showing the version of the ServiceBus package and a screenshot of the number of exceptions received on that particular day. I should note that each exception in the list refers to a different Service Bus queue/topic. In terms of using v4.1.3 in both environments, unfortunately that is not possible yet, as we are still waiting for some other development work to be completed before it can be rolled out to our Production environment. I am happy to provide any other information which might help track the issue down. Thanks
Just 13 hours ago, the Functions project experienced more of these exceptions. As an interesting data point, I also have a completely different Functions project (running the same Functions and ServiceBus DLL versions) deployed in the same cluster but in a separate Kubernetes pod, which ALSO experienced the same exceptions at that time... Any ideas?
@mr-davidc thanks for confirming the version. Can you share with us a snippet of your code from when you run into this error, and we will see if we can repro it? Are you using ReceiveAsync or RegisterMessageHandler? If we can repro it, we could try to translate this exception into a communication exception so the SDK's retry logic will automatically retry.
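For context on the question above, the two receive patterns differ in who drives the loop. The following is a minimal sketch of both against the Microsoft.Azure.ServiceBus API; the connection string and queue name are placeholders, not values from this thread:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

class ReceivePatterns
{
    static async Task Main()
    {
        // Placeholder connection string and entity name.
        var connectionString = "<service-bus-connection-string>";
        var queueName = "my-queue";

        // Pattern 1: explicit pull with ReceiveAsync.
        var receiver = new MessageReceiver(connectionString, queueName);
        Message message = await receiver.ReceiveAsync(TimeSpan.FromSeconds(5));
        if (message != null)
        {
            await receiver.CompleteAsync(message.SystemProperties.LockToken);
        }

        // Pattern 2: push model with RegisterMessageHandler; the SDK runs
        // the receive loop and invokes the callback per message.
        var client = new QueueClient(connectionString, queueName);
        client.RegisterMessageHandler(
            async (msg, cancellationToken) =>
            {
                Console.WriteLine(Encoding.UTF8.GetString(msg.Body));
                await client.CompleteAsync(msg.SystemProperties.LockToken);
            },
            new MessageHandlerOptions(args => Task.CompletedTask)
            {
                MaxConcurrentCalls = 1,
                AutoComplete = false
            });
    }
}
```

As the next comment explains, a Functions app uses neither pattern directly; the runtime's internal pump sits on top of the same receiver machinery.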
So that's the thing @DorothySun216, I'm not actually using ReceiveAsync or RegisterMessageHandler directly; the functions are bound with Service Bus queue and topic triggers. The exceptions appear to be coming from the Functions runtime itself, from the MessageReceivePump shown in the stack traces.
@mr-davidc Thanks for the info. I will reach out to the Azure Functions team to see how they are calling our API internally since we don't have access to that code. Are you blocked on this issue?
@DorothySun216 No, not blocked as such, but ideally I would like to get to a point where these exceptions no longer occur, as they are essentially false positives at the moment. Looking forward to seeing what you come back with after discussing with the Azure Functions team. Thanks for your help.
We are also experiencing the same exceptions (System.InvalidOperationException: Can't create session when the connection is closing). It started occurring in May this year and happens intermittently. We are using the Microsoft.Azure.ServiceBus package as well.
@tharidu thanks so much for reporting this. We have already created a work item to track it. But due to our bandwidth, and since this is not a blocking error, the investigation might take some time, as we need to focus on high-priority issues for now. I would recommend treating this as a transient error for now and adding retry mechanisms to deal with it. I will update as soon as we figure out anything for a fix. Thanks.
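Following the suggestion above to treat the error as transient, a caller-side retry could look roughly like the sketch below. This is a hypothetical helper, not part of the SDK; the retry count, the linear backoff, and catching InvalidOperationException broadly are all illustrative choices:

```csharp
using System;
using System.Threading.Tasks;

static class TransientRetry
{
    // Hypothetical wrapper: retries an operation when the SDK surfaces
    // the "Can't create session when the connection is closing" error,
    // which this thread treats as transient. A production version would
    // likely inspect the exception message or type more narrowly.
    public static async Task<T> RunAsync<T>(
        Func<Task<T>> operation, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (InvalidOperationException) when (attempt < maxAttempts)
            {
                // Simple linear backoff before the next attempt.
                await Task.Delay(TimeSpan.FromSeconds(attempt));
            }
        }
    }
}
```

A caller would wrap a receive call, e.g. `TransientRetry.RunAsync(() => receiver.ReceiveAsync())`, so that genuine failures still surface once the attempts are exhausted.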
@tharidu Do you mind if I ask what version of Kubernetes your AKS cluster is running?
@DorothySun216 Thanks for the reply (y)
Ok thanks @tharidu. I was wondering if upgrading our cluster version might help in resolving the issue, but I guess not.
Hello, we have also experienced the same issue and opened an incident with Microsoft. While we are not blocked, it does cause rework due to retries and failures to dead-letter. This affects several of us across many integrations.
We have rolled out a fix (#17023) in the latest release, 5.1.0. Can you test whether you still see the same issue with this new NuGet package? https://www.nuget.org/packages/Microsoft.Azure.ServiceBus/5.1.0
One customer is still seeing this issue after upgrading to 5.1.0. There is a singleton concurrent opening & closing bug that was fixed in version 2.4.9 of the AMQP library we depend on, which might affect the connection-close problem in this case. Can you upgrade to the NuGet dependency 5.1.1, https://www.nuget.org/packages/Microsoft.Azure.ServiceBus/, to see if your tests pass?
We have the same kind of problem and it's a huge blocker for us. We are using:
Thanks for the updates @DorothySun216! This issue has gone stale over time. Since we haven't heard back from the original set of issue reporters after @DorothySun216 released updates for the Microsoft.Azure.ServiceBus package, we are going to assume that the problem has been resolved. If not, please log a new issue.
@keodime We see your comment, made after @DorothySun216's post on the fixes. Since it has been 5 months since you reported the issue, can you confirm whether you are still having the same problem? If so, please log a new issue and we can assist as needed.
This issue is related to #9416 however I was asked to open a fresh thread.
Describe the bug
Intermittently, for quite some time, our Azure Function instances running in AKS have been receiving the exceptions below, which come through into Sentry.
We have a pod running .NET Core 2.2.8 with Functions v2 in our Production Kubernetes cluster and, after a recent upgrade, a different pod running .NET Core 3.1.5 with Functions v3 in our Sandbox cluster; the exceptions are still being received from both pods intermittently. It seems to happen at random times, often days apart. I hoped upgrading to Functions v3 might resolve the issue, but alas it persists.
The production Functions pod references Microsoft.Azure.ServiceBus v3.4.0 and the sandbox Functions pod references Microsoft.Azure.ServiceBus v4.1.3.
The exception also seems to occur regardless of whether the function definition is for a Queue or Topic trigger.
Actual behavior (include Exception or Stack Trace)
Exception message:
Message processing error (Action=Receive, ClientId=MessageReceiver12account-events/Subscriptions/new-account-setup, EntityPath=account-events/Subscriptions/new-account-setup, Endpoint=sndbx-sb-project-au.servicebus.windows.net)
Note: It happens with lots of different Service Bus queues/topics; the exception message often relates to a different queue/topic each time.
Stack Trace:
System.InvalidOperationException: Can't create session when the connection is closing.
Module "Microsoft.Azure.ServiceBus.Core.MessageReceiver", in OnReceiveAsync
Module "System.Runtime.ExceptionServices.ExceptionDispatchInfo", in Throw
Module "System.Runtime.CompilerServices.TaskAwaiter", in ThrowForNonSuccess
Module "System.Runtime.CompilerServices.TaskAwaiter", in HandleNonSuccessAndDebuggerNotification
Module "Microsoft.Azure.ServiceBus.Core.MessageReceiver+<>c__DisplayClass64_0+<b__0>d", in MoveNext
Module "System.Runtime.ExceptionServices.ExceptionDispatchInfo", in Throw
Module "System.Runtime.CompilerServices.TaskAwaiter", in ThrowForNonSuccess
Module "Microsoft.Azure.ServiceBus.RetryPolicy", in RunOperation
Module "System.Runtime.ExceptionServices.ExceptionDispatchInfo", in Throw
Module "Microsoft.Azure.ServiceBus.RetryPolicy", in RunOperation
Module "System.Runtime.ExceptionServices.ExceptionDispatchInfo", in Throw
Module "System.Runtime.CompilerServices.TaskAwaiter", in ThrowForNonSuccess
Module "System.Runtime.CompilerServices.TaskAwaiter", in HandleNonSuccessAndDebuggerNotification
Module "Microsoft.Azure.ServiceBus.Core.MessageReceiver", in ReceiveAsync
Module "System.Runtime.ExceptionServices.ExceptionDispatchInfo", in Throw
Module "System.Runtime.CompilerServices.TaskAwaiter", in ThrowForNonSuccess
Module "System.Runtime.CompilerServices.TaskAwaiter", in HandleNonSuccessAndDebuggerNotification
Module "Microsoft.Azure.ServiceBus.Core.MessageReceiver", in ReceiveAsync
Module "System.Runtime.ExceptionServices.ExceptionDispatchInfo", in Throw
Module "System.Runtime.CompilerServices.TaskAwaiter", in ThrowForNonSuccess
Module "System.Runtime.CompilerServices.TaskAwaiter", in HandleNonSuccessAndDebuggerNotification
Module "Microsoft.Azure.ServiceBus.MessageReceivePump+<b__11_0>d", in MoveNext
Another interesting piece of info is that I am also receiving this exception at essentially the same time:
System.ObjectDisposedException: Cannot access a disposed object.
Object name: '$cbs'.
Module "Microsoft.Azure.ServiceBus.Core.MessageReceiver", in OnReceiveAsync
Module "System.Runtime.ExceptionServices.ExceptionDispatchInfo", in Throw
Module "System.Runtime.CompilerServices.TaskAwaiter", in ThrowForNonSuccess
Module "System.Runtime.CompilerServices.TaskAwaiter", in HandleNonSuccessAndDebuggerNotification
Module "Microsoft.Azure.ServiceBus.Core.MessageReceiver+<>c__DisplayClass64_0+<b__0>d", in MoveNext
Module "System.Runtime.ExceptionServices.ExceptionDispatchInfo", in Throw
Module "System.Runtime.CompilerServices.TaskAwaiter", in ThrowForNonSuccess
Module "Microsoft.Azure.ServiceBus.RetryPolicy", in RunOperation
Module "System.Runtime.ExceptionServices.ExceptionDispatchInfo", in Throw
Module "Microsoft.Azure.ServiceBus.RetryPolicy", in RunOperation
Module "System.Runtime.ExceptionServices.ExceptionDispatchInfo", in Throw
Module "System.Runtime.CompilerServices.TaskAwaiter", in ThrowForNonSuccess
Module "System.Runtime.CompilerServices.TaskAwaiter", in HandleNonSuccessAndDebuggerNotification
Module "Microsoft.Azure.ServiceBus.Core.MessageReceiver", in ReceiveAsync
Module "System.Runtime.ExceptionServices.ExceptionDispatchInfo", in Throw
Module "System.Runtime.CompilerServices.TaskAwaiter", in ThrowForNonSuccess
Module "System.Runtime.CompilerServices.TaskAwaiter", in HandleNonSuccessAndDebuggerNotification
Module "Microsoft.Azure.ServiceBus.Core.MessageReceiver", in ReceiveAsync
Module "System.Runtime.ExceptionServices.ExceptionDispatchInfo", in Throw
Module "System.Runtime.CompilerServices.TaskAwaiter", in ThrowForNonSuccess
Module "System.Runtime.CompilerServices.TaskAwaiter", in HandleNonSuccessAndDebuggerNotification
Module "Microsoft.Azure.ServiceBus.MessageReceivePump+<b__11_0>d", in MoveNext
To Reproduce
I'm not too sure, since it happens intermittently once the Functions project is deployed. I have never encountered this exception when debugging locally.
An example of one of the Topic trigger function definitions is:
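(The original snippet was not captured in this export. A generic Service Bus topic trigger for Functions v3 looks roughly like the following; the function, topic, subscription, and connection names are placeholders modeled on the entity path in the exception message, not the actual definition:)

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class NewAccountSetupFunction
{
    // Hypothetical trigger definition: topic/subscription names are taken
    // from the entity path in the exception message above; the real
    // function body was not captured in this export.
    [FunctionName("NewAccountSetup")]
    public static void Run(
        [ServiceBusTrigger(
            "account-events",
            "new-account-setup",
            Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation($"Processing message: {message}");
    }
}
```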
This is the csproj file (for the Sandbox Functions V3):
And the host.json file:
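(The actual file contents were not captured in this export. A typical Functions v3 host.json with Service Bus extension settings looks like the fragment below; all values are illustrative defaults, not the reporter's configuration:)

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "prefetchCount": 100,
      "messageHandlerOptions": {
        "autoComplete": true,
        "maxConcurrentCalls": 16,
        "maxAutoRenewDuration": "00:05:00"
      }
    }
  }
}
```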
Environment:
Let me know if you require any more information and thanks in advance for your assistance.