Ten messages are posted one by one to the ActiveMQ Artemis queue. Nine of the ten jobs spin up correctly and run to completion, but one job is never spawned. I waited 30 minutes and verified that the unconsumed message is still present in the queue. There are no Running jobs, yet the KEDA operator pod log keeps saying:
Number of running jobs = 1, Number of pending jobs = 1.
To work around this, I deleted the message from the queue. Even then, the KEDA operator pod log keeps saying:
Number of running jobs = 1, Number of pending jobs = 1.
I had to delete the ScaledJob and apply it again to get rid of the erroneous log messages. I suspect the queue information is cached within the KEDA operator and is not being refreshed.
The issue was not observed in the next run, but it appeared again in the third run. This scaling issue is seen most often when there are more than 10 messages in the queue.
The same issue is seen with other ScaledJob specs as well.
Expected Behavior
Jobs must scale accurately, regardless of the number of messages in the queue.
The KEDA operator must report correct information in its logs.
Actual Behavior
The KEDA operator is not scaling the jobs correctly, and incorrect information is cached in it.
Steps to Reproduce the Problem
1. Create a ScaledJob spec as per the attachment.
2. Post 10 or more messages to the ActiveMQ Artemis queue.
3. Observe the scaling of Jobs.
4. Repeat the steps.
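For illustration, a minimal ScaledJob of the shape used here might look like the following. This is only a sketch, not the attached spec; the image, endpoint, queue, and broker names are placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: fe-supervised-job
  namespace: road
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: worker
            image: example.com/worker:latest   # placeholder image
        restartPolicy: Never
  maxReplicaCount: 10
  scalingStrategy:
    strategy: "accurate"
  triggers:
    - type: artemis
      metadata:
        managementEndpoint: "artemis.example.svc.cluster.local:8161"  # placeholder
        queueName: "jobs"       # placeholder
        brokerName: "broker"    # placeholder
        brokerAddress: "jobs"   # placeholder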
Logs from KEDA operator
2024-08-06T11:11:38Z INFO scaleexecutor Scaling Jobs {"scaledJob.Name": "fe-supervised-job", "scaledJob.Namespace": "road", "Number of running Jobs": 1}
2024-08-06T11:11:38Z INFO scaleexecutor Scaling Jobs {"scaledJob.Name": "fe-supervised-job", "scaledJob.Namespace": "road", "Number of pending Jobs ": 1}
2024-08-06T11:11:38Z INFO scaleexecutor Creating jobs {"scaledJob.Name": "fe-supervised-job", "scaledJob.Namespace": "road", "Effective number of max jobs": 0}
2024-08-06T11:11:38Z INFO scaleexecutor Creating jobs {"scaledJob.Name": "fe-supervised-job", "scaledJob.Namespace": "road", "Number of jobs": 0}
2024-08-06T11:11:38Z INFO scaleexecutor Created jobs {"scaledJob.Name": "fe-supervised-job", "scaledJob.Namespace": "road", "Number of jobs": 0}
2024-08-06T11:11:38Z INFO scaleexecutor Scaling Jobs {"scaledJob.Name": "fe-unsupervised-job", "scaledJob.Namespace": "road", "Number of running Jobs": 1}
2024-08-06T11:11:38Z INFO scaleexecutor Scaling Jobs {"scaledJob.Name": "fe-unsupervised-job", "scaledJob.Namespace": "road", "Number of pending Jobs ": 1}
2024-08-06T11:11:38Z INFO scaleexecutor Creating jobs {"scaledJob.Name": "fe-unsupervised-job", "scaledJob.Namespace": "road", "Effective number of max jobs": 0}
2024-08-06T11:11:38Z INFO scaleexecutor Creating jobs {"scaledJob.Name": "fe-unsupervised-job", "scaledJob.Namespace": "road", "Number of jobs": 0}
2024-08-06T11:11:38Z INFO scaleexecutor Created jobs {"scaledJob.Name": "fe-unsupervised-job", "scaledJob.Namespace": "road", "Number of jobs": 0}
KEDA Version
2.13.0
Kubernetes Version
1.28
Platform
Other
Scaler Details
ActiveMQ Artemis
Anything else?
Kindly let me know if any configuration is missing from the spec and needs to be added or modified, or if there is any other workaround. Please treat this issue with high priority, as the problem is seen in a customer deployment.
Is this issue fixed in the newer version of KEDA?
Hello
Is there any reason to use accurate as strategy? IDK how ActiveMQ works but I'd say that default is enough and should work better in your case. Are you picking & locking messages? AFAIR, ActiveMQ doesn't expose locked messages as visible.
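For context, the numbers in your log are consistent with how the `accurate` ScaledJob strategy is documented to work: a pending job already counts against the scale target. A minimal Python sketch of the documented formula (variable names are mine, not KEDA's):

```python
# Sketch of KEDA's documented "accurate" ScaledJob scaling strategy.
# Names and structure are illustrative, not KEDA source code.

def effective_max_jobs(queue_length, running, pending, max_replica_count):
    """Return how many new jobs the operator would create this cycle."""
    # The scale target is capped by maxReplicaCount.
    max_scale = min(queue_length, max_replica_count)
    if max_scale + running > max_replica_count:
        return max_replica_count - running
    # "accurate": jobs already pending are assumed to consume messages.
    return max_scale - pending

# The situation from the logs above: one unconsumed message, one job the
# operator still counts as pending -> no new job is created.
print(effective_max_jobs(queue_length=1, running=1, pending=1,
                         max_replica_count=10))
```

So as long as the operator believes a job is pending for that message, "Effective number of max jobs" stays 0, which matches your log lines.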
Please treat this issue with high priority as this problem is seen in the customer deployment.
This is an OSS project and it's summer; we will try to help as much as we can, but please don't expect any priority beyond best effort. If you are in a hurry, I'd suggest contacting one of the vendors that offer enterprise support for KEDA 😄