
Large query results causing Redis to lose connection #3669

Open
danielfaulknor opened this issue Apr 2, 2019 · 1 comment
@danielfaulknor

Issue Summary

A very large result set (960,000 rows) causes Redis to give up while the query is in the saving_results state.

Steps to Reproduce

  1. Run a query that generates 960,000 rows
  2. Watch it fail in the logs (https://gist.github.com/danielfaulknor/b6f6a99246cfcaa23e5020201d5c7fb8)

Why I consider this a bug: it is consistently reproducible, and no error appears in the GUI; the query just gets stuck in the saving_results state, and I have to run redis-cli flushall to clear it.

The Docker host has 12 GB of RAM, and memory usage never gets past about 7.5 GB.

Technical details:

  • Redash Version: 5.0.2+b5486
  • Browser/OS: Chrome on Windows
  • How did you install Redash: docker-compose
@zachliu
Contributor

zachliu commented Apr 3, 2019

@danielfaulknor Have you tried playing with the CELERYD_MAX_TASKS_PER_CHILD (Celery 3) or worker_max_memory_per_child (Celery 4) setting?
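
For reference, a minimal sketch of where those options live, assuming a bare Celery app (the app name and the 2 GB threshold below are illustrative, not Redash's shipped configuration):

```python
from celery import Celery

app = Celery("redash")

# Celery 4.x: once a worker process's resident memory exceeds this limit
# (value is in kilobytes), the pool replaces it after the current task
# finishes, reclaiming memory held by huge result sets.
app.conf.worker_max_memory_per_child = 2_000_000  # ~2 GB, illustrative

# Celery 3.x had no memory-based limit; the closest knob restarts each
# worker process after it has executed this many tasks:
# CELERYD_MAX_TASKS_PER_CHILD = 10
```

Note the memory check only runs between tasks, so a single task that balloons past the limit still finishes (or dies) before its process is recycled.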
