Performance Strategy #4 #714
Replies: 1 comment
-
Hi! Thanks for sharing your experience! What you have shared makes sense. Overall, aside from how your workers are designed, one of the biggest costs of a worker pool is the overhead of constantly spawning new workers: that setup work adds extra latency to whatever work you are handling. On top of that, setting a minimum number of workers (and possibly running some warm-up tasks to fill a cache, load configuration or modules, etc.) can be beneficial. Handling several kinds of workloads within the same worker, as you did, is a good idea, since you keep reusing your workers and the spawning overhead stays at a minimum. A contribution to the documentation is always welcome!
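As a rough illustration, something like the sketch below keeps a couple of workers warm. The worker path, the thread count, and the `{ type: 'warmup' }` payload are assumptions on my side; `minThreads` and `idleTimeout` are the documented Piscina constructor options:

```ts
import { Piscina } from 'piscina';
import { resolve } from 'node:path';

// Keep a couple of workers alive instead of respawning them for each burst of work.
const pool = new Piscina({
  filename: resolve(__dirname, 'worker.js'), // assumed worker module path
  minThreads: 2,        // workers kept ready at all times
  idleTimeout: 60_000,  // ms an idle worker survives before being torn down
});

// Optional warm-up: run one cheap task per minimum worker so module loading and
// cache filling happen before real jobs arrive. The { type: 'warmup' } job is an
// assumption; the worker module would need to handle it.
export async function warmUp(): Promise<void> {
  await Promise.all(
    Array.from({ length: 2 }, () => pool.run({ type: 'warmup' }))
  );
}
```

Whether warming up pays off depends on how expensive your worker's module loading and setup really are.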
-
Hello!
I've been using Piscina now for a few weeks, and I'm a big big fan!
I've been mulling over my own "performance optimization" and I wanted to see if you thought it was valid or not.
In my own project, I'm using `exiftool-vendored` to read metadata, `md5` to hash data, `sharp` to transform/convert images, and `ffmpeg` to transform/convert video, all tasks which benefit greatly from operating on worker threads. When I was first starting to use Piscina, I ran into the thrashing issues you mention in the performance documentation.

In addition to raising the `idleTimeout` as suggested, I also ended up implementing the "multiple workers in one file" example with the dispatcher strategy, whereas before I had 4 different worker modules that were loaded based on the job at hand. For me, this improved performance, as the libraries that get loaded stay resident for other jobs in the queue rather than being spun up and down. It also stopped some inadvertent memory leaks caused by listeners that were attached as workers were spun up and down.

I wanted to see whether what I'm theorizing matches how Piscina works under the hood. If so, would you be open to a PR for the performance and "multiple threads in one file" documentation?
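For illustration, here's a minimal sketch of the dispatcher-style worker I mean (the job shape and handlers are simplified stand-ins, not my exact code, and the `ffmpeg` branch is left out):

```ts
// worker.ts: one worker module that handles several kinds of jobs, so the
// heavy libraries stay loaded in the thread between tasks.
import { exiftool } from 'exiftool-vendored';
import { createHash } from 'node:crypto';
import sharp from 'sharp';

type Job =
  | { type: 'metadata'; path: string }
  | { type: 'hash'; data: Uint8Array } // Buffers arrive as Uint8Array after structured clone
  | { type: 'image'; path: string; width: number };

// Piscina calls the default export; the switch dispatches on the job type.
export default async function dispatch(job: Job) {
  switch (job.type) {
    case 'metadata':
      return exiftool.read(job.path); // exiftool process stays warm across jobs
    case 'hash':
      return createHash('md5').update(job.data).digest('hex');
    case 'image':
      return sharp(job.path).resize(job.width).toBuffer();
    default:
      throw new Error('Unknown job type');
  }
}
```

On the main thread, every job then goes through the same pool, e.g. `pool.run({ type: 'metadata', path })`, so the warm workers get reused across all the workloads.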
Thank you!