Replies: 2 comments 2 replies
- One option would be to limit the number of concurrent runs your deployment allows, using concurrency limits: https://docs.dagster.io/guides/limiting-concurrency-in-data-pipelines#limiting-overall-runs
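  For reference, a minimal sketch of what that could look like with the official Helm chart, assuming the daemon's run queue is configured via `dagsterDaemon.runCoordinator` in `values.yaml` (the limit of 5 is only an illustration; verify the key names against your chart version):

  ```yaml
  # values.yaml (excerpt) -- cap how many runs the daemon launches at once.
  # Runs beyond the limit wait in the queue instead of opening extra
  # connections to the Dagster storage.
  dagsterDaemon:
    runCoordinator:
      enabled: true
      type: QueuedRunCoordinator
      config:
        queuedRunCoordinator:
          # illustrative value; tune to your Postgres capacity
          maxConcurrentRuns: 5
  ```

  Outside of Helm, the same limit is `max_concurrent_runs` under the `run_coordinator` section of `dagster.yaml`.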
2 replies
- Looks like this is hard-coded in dagster_postgres without an option to configure it:
- Hello, I have a Dagster application, with dagster-postgres as the storage, deployed using the official Helm chart.
  Currently I have about 10 recurring jobs, each running every minute. The problem is that once the jobs all run in parallel, there is obvious contention between the tasks for connections to the Dagster storage (Postgres in this case).
  Exception:
  My question is: can this be avoided by some configuration on the Dagster side, for example by setting the pool size for the component that connects to the storage? Or do I have to either increase the Postgres connection limit or set up connection pooling between Dagster and my Postgres database?
Thank you
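  For context, the "pooling between Dagster and Postgres" option mentioned above usually means running a connection pooler such as PgBouncer in front of the database and pointing Dagster's Postgres host/port settings at it. A minimal illustrative `pgbouncer.ini`, where hostnames, database names, and pool sizes are placeholder assumptions:

  ```ini
  ; pgbouncer.ini (sketch) -- hosts, names, and sizes are placeholders
  [databases]
  ; route the "dagster" database to the real Postgres server
  dagster = host=my-postgres.internal port=5432 dbname=dagster

  [pgbouncer]
  ; Dagster's Postgres connection settings would point at this listener
  listen_addr = 0.0.0.0
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  ; cap the number of server-side connections Postgres actually sees;
  ; extra clients queue at the pooler instead of failing
  pool_mode = session
  default_pool_size = 20
  max_client_conn = 200
  ```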