Large task_processor_task table #4271
Comments
For now I have disabled the task processor so things can operate "normally".
Can you confirm the value of the following env vars:
The second step would be to check if the
I haven't set those, so they must be at their default values. They are not set in the default docker-compose.yaml either. I cannot confirm whether this was happening before, but I noticed this behavior today immediately after I upgraded from 1.109 to 1.126. Edit: not true; looking at the PostgreSQL logs, this has been going on for a while.
Two recurring tasks are locked.
Yeah, you can do that, but be aware of the performance impact of deleting such a large amount of data. Going forward, you should unlock tasks.clean_up_old_tasks so that the task processor cleans up old tasks on its own.
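For anyone doing that manual cleanup: one way to limit the performance impact is to delete in small batches rather than in one huge DELETE. This is only a sketch; the `completed` and `scheduled_for` column names are assumptions and must be checked against the actual task_processor_task schema before running anything.

```sql
-- Hypothetical batched cleanup; column names are assumptions, verify against
-- the real schema first. Repeat until DELETE reports 0 rows.
DELETE FROM task_processor_task
WHERE id IN (
    SELECT id
    FROM task_processor_task
    WHERE completed = TRUE
      AND scheduled_for < NOW() - INTERVAL '30 days'
    ORDER BY id
    LIMIT 10000
);
```

Keeping each batch small keeps lock times short and lets autovacuum keep up between batches.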
I've cleaned things up, but the remaining 7 million rows still pose a problem. SELECT times for
I think you'd have to run
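Not necessarily what was meant above, but after bulk deletes it is worth checking whether autovacuum has actually reclaimed the dead tuples; these are standard PostgreSQL catalog views and commands:

```sql
-- How many dead tuples remain, and when vacuum last ran:
SELECT n_live_tup, n_dead_tup, last_autovacuum, last_vacuum
FROM pg_stat_user_tables
WHERE relname = 'task_processor_task';

-- If dead tuples dominate, a manual vacuum with fresh statistics helps:
VACUUM (VERBOSE, ANALYZE) task_processor_task;
```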
This didn't help. The problem is in

Rewriting the get_tasks_to_process to use

Adding priority to the index doesn't convince Postgres to use it.
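The working index definition is elided above, but a common pattern for queue tables like this one is a partial index covering only the pending rows. The predicate and key columns here are hypothetical and would need to match what get_tasks_to_process actually filters and orders on:

```sql
-- Hypothetical partial index: covers only pending tasks, so it stays small
-- even when the table holds millions of completed rows.
CREATE INDEX CONCURRENTLY task_processor_task_pending_idx
    ON task_processor_task (scheduled_for)
    WHERE completed = FALSE;
```

`CONCURRENTLY` avoids blocking writes while the index is built, at the cost of a slower build.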
This index works...
The default index produces the following plan:
What is the value of
There are a lot of factors that decide the query plan generated by Postgres. For example, our database generates the following plan with the default index (with random_page_cost = 1.1):
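For reference, the planner setting mentioned above can be inspected and the resulting plan compared with standard PostgreSQL commands:

```sql
SHOW random_page_cost;

-- On SSD-backed storage a lower value makes index scans cheaper in the
-- planner's cost model, often tipping it toward the index:
SET random_page_cost = 1.1;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM get_tasks_to_process(10);
```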
I am going to close this issue now because we can't really change the index.
How are you running Flagsmith
Describe the bug
Our task_processor_task table has ~56 million records. It seems that the task processor is not keeping up and the query
SELECT * FROM get_tasks_to_process(10)
takes ~70s to complete. This seems to fill up the connections to pgbouncer and then to PostgreSQL, and thus Flagsmith itself is unable to operate efficiently. If it helps: we run clients in local evaluation mode and use segments to be able to turn some features on (as this is the only option available in local evaluation mode).
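To reproduce the numbers above without a slow full `COUNT(*)`, the row estimate and on-disk size can be read from the PostgreSQL catalogs:

```sql
-- Approximate row count (fast, from planner statistics):
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'task_processor_task';

-- On-disk size including indexes:
SELECT pg_size_pretty(pg_total_relation_size('task_processor_task'));

-- Time the problematic call and see its plan:
EXPLAIN ANALYZE SELECT * FROM get_tasks_to_process(10);
```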
Steps To Reproduce
Expected behavior
The task processor is able to clear the table and not block other things.
Screenshots
No response