
ActiveMQ Artemis Scaler: ScaledJob not scaling correctly #6039

Closed
amitherlekar opened this issue Aug 6, 2024 · 2 comments
Labels
bug Something isn't working

Comments


amitherlekar commented Aug 6, 2024

Report

The ScaledJob spec is attached.

Ten messages are posted one by one to the ActiveMQ Artemis queue. Out of the 10, 9 jobs spin up correctly and run to completion. One job is never spawned. I waited for 30 minutes and verified that the unconsumed message is still present in the queue. There are no running jobs, but the KEDA operator pod logs always say:
Number of running jobs = 1, Number of pending jobs = 1.

To overcome this, I deleted the message from the queue. Even then, the KEDA operator pod logs still say:
Number of running jobs = 1, Number of pending jobs = 1.

I had to delete the ScaledJob and apply it again to get rid of the erroneous log messages. It appears the queue information is cached inside the KEDA operator and is not refreshed.

The issue was not observed in the next run, but it appeared again in the third run. This scaling issue is seen most often when there are more than 10 messages in the queue.

The same issue is seen with other ScaledJob specs as well.

Expected Behavior

  1. Jobs must be scaled accurately, irrespective of the number of messages in the queue.
  2. The KEDA operator must report correct information in its logs.

Actual Behavior

The KEDA operator is not scaling the jobs correctly; stale queue information appears to be cached inside it.

Steps to Reproduce the Problem

  1. Create a ScaledJob spec as per the attachment. Post 10 or more messages to the ActiveMQ Artemis queue.
  2. Observe the scaling of Jobs
  3. Repeat the steps
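
The attached spec is not reproduced here; for readers of this issue, the setup can be sketched as a hypothetical ScaledJob along the following lines. The name and namespace are taken from the logs below; the image, endpoint, queue names, and auth reference are placeholders, and the accurate strategy is inferred from the discussion further down. The actual attachment may differ.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: fe-supervised-job        # name as seen in the operator logs
  namespace: road
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: worker
            image: example/worker:latest   # placeholder image
        restartPolicy: Never
  pollingInterval: 10
  maxReplicaCount: 10
  scalingStrategy:
    strategy: "accurate"         # strategy in use when the issue occurred
  triggers:
    - type: artemis-queue        # KEDA's ActiveMQ Artemis scaler
      metadata:
        managementEndpoint: "artemis.example.svc:8161"  # placeholder
        queueName: "worker-queue"                       # placeholder
        brokerName: "broker"                            # placeholder
        brokerAddress: "worker-queue"                   # placeholder
        queueLength: "1"
      authenticationRef:
        name: artemis-trigger-auth                      # placeholder
```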

Logs from KEDA operator

2024-08-06T11:11:38Z	INFO	scaleexecutor	Scaling Jobs	{"scaledJob.Name": "fe-supervised-job", "scaledJob.Namespace": "road", "Number of running Jobs": 1}
2024-08-06T11:11:38Z	INFO	scaleexecutor	Scaling Jobs	{"scaledJob.Name": "fe-supervised-job", "scaledJob.Namespace": "road", "Number of pending Jobs ": 1}
2024-08-06T11:11:38Z	INFO	scaleexecutor	Creating jobs	{"scaledJob.Name": "fe-supervised-job", "scaledJob.Namespace": "road", "Effective number of max jobs": 0}
2024-08-06T11:11:38Z	INFO	scaleexecutor	Creating jobs	{"scaledJob.Name": "fe-supervised-job", "scaledJob.Namespace": "road", "Number of jobs": 0}
2024-08-06T11:11:38Z	INFO	scaleexecutor	Created jobs	{"scaledJob.Name": "fe-supervised-job", "scaledJob.Namespace": "road", "Number of jobs": 0}
2024-08-06T11:11:38Z	INFO	scaleexecutor	Scaling Jobs	{"scaledJob.Name": "fe-unsupervised-job", "scaledJob.Namespace": "road", "Number of running Jobs": 1}
2024-08-06T11:11:38Z	INFO	scaleexecutor	Scaling Jobs	{"scaledJob.Name": "fe-unsupervised-job", "scaledJob.Namespace": "road", "Number of pending Jobs ": 1}
2024-08-06T11:11:38Z	INFO	scaleexecutor	Creating jobs	{"scaledJob.Name": "fe-unsupervised-job", "scaledJob.Namespace": "road", "Effective number of max jobs": 0}
2024-08-06T11:11:38Z	INFO	scaleexecutor	Creating jobs	{"scaledJob.Name": "fe-unsupervised-job", "scaledJob.Namespace": "road", "Number of jobs": 0}
2024-08-06T11:11:38Z	INFO	scaleexecutor	Created jobs	{"scaledJob.Name": "fe-unsupervised-job", "scaledJob.Namespace": "road", "Number of jobs": 0}


KEDA Version

2.13.0

Kubernetes Version

1.28

Platform

Other

Scaler Details

ActiveMQ Artemis

Anything else?

Kindly let me know if any configuration is missing from the spec that needs to be added or modified, or if there is any other workaround. Please treat this issue with high priority, as the problem is seen in a customer deployment.

Is this issue fixed in a newer version of KEDA?

@amitherlekar amitherlekar added the bug Something isn't working label Aug 6, 2024
@JorTurFer (Member) commented:

Hello,
Is there any reason to use accurate as the scaling strategy? I don't know how ActiveMQ works internally, but I'd say the default strategy is enough and should work better in your case. Are you picking and locking messages? As far as I recall, ActiveMQ doesn't expose locked messages as visible.

Please treat this issue with high priority as this problem is seen in the customer deployment.

This is an OSS project and it is summer; we will try to help as much as we can, but don't expect any kind of priority other than best effort. If you are in a hurry, I'd suggest contacting one of the vendors that offer enterprise support for KEDA 😄

@amitherlekar (Author) commented:

After setting scaling strategy to "default", it works as expected. Thank you. I am closing this issue.
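
For anyone hitting the same behavior: the resolution amounts to a one-line change in the ScaledJob spec's scaling strategy. A minimal sketch (per the KEDA ScaledJob spec, omitting the field entirely also selects the default strategy):

```yaml
spec:
  scalingStrategy:
    strategy: "default"   # was "accurate"
```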
