Too much log output seems to cause processes to freeze #57
Comments
Hi @john-rodewald, Thank you for trying Process Compose and for letting me know about its issues. A few questions:
Also, if you can give me some numbers, I will try to reproduce it on my end.
This is in TUI mode. I've tried with
None is configured at the moment. I now tried setting it to
Tried this with no difference in behaviour.
There's a set of reproduction steps I can follow that reliably causes freezing. It seems that we're in the range of ~100 log lines per second before I see freezing. By the time I observe this peak, the process has already stopped responding. Reducing verbosity brings me down to a peak of ~20 log lines per second. In this case, no freezing occurs. I find these numbers to be fairly low, both in absolute and relative terms. 🤔
This is very helpful. Another question; this one is probably more specific to your business logic. How do you measure (or experience) the freezes?
I'm glad to hear!
We only use
Seems to be the case for me too. Also reproduced: dumping a 5MB text file freezes.
I created a small test environment that prints about 100K log lines/sec. When it runs outside process-compose, it takes ~1.1s to print 100K lines. When it runs inside process-compose, it takes ~1.3s to print 100K lines. It ran for a few minutes and was relatively stable [1.2s-1.4s], with no freezes. Will you give this test a try on your system?
Not sure if that response was directed at me, but did you try one big ~5MB line? Not many lines, but one big one? I guess the TUI may fail on such input.
I tried to Not sure that it's the same issue @john-rodewald is facing. |
20MB, no, but I do have 5MB ones, yes. I run Wasm hosts, and they sometimes respond into the logs with Wasm blobs as base64-encoded one-liners.
@dzmitry-lahoda Do you mind opening a separate issue for this scenario? |
Thanks for checking! I tried running this both with To rule out issues with the actual log content, I captured This is bizarre. 🤔
This is bizarre indeed... A few external factors to consider:
Hi @john-rodewald, can you please update us if you discovered any additional information that can help reproduce the issue? Another question: when you experience those freezes, does the logger stop entirely (until restarted), or does it slow down for some time? For how long?
I ran into this issue as well. One of my processes logs an excessively long line, and it hangs writing to stdout. I had a lot of goroutines in my process stuck on the zap logger trying to log to stdout (under a lock, presumably to prevent mangled lines). I recompiled process-compose with pprof enabled and dumped the goroutines. At a glance, nothing seems to be deadlocked, but I do see a bunch of goroutines scanning lines (see https://gist.github.com/appaquet/d960f5f4bf7aae018971735c5f73b0a4), as they should be. I then tried increasing the max token size to a bigger value, and it does fix my issue:
Perhaps making this max line value configurable per process could be a fix.
Hi @appaquet, You are absolutely right; this is precisely the place that caused it to hang for @dzmitry-lahoda. In one of the latest versions, I added an explicit error printed to the process-compose log. Maybe it's time for a troubleshooting section in the README. I am not sure, though, that this is the error @john-rodewald is facing.
I can confirm that I have the same problem (some log lines are 10MB). Why not raise it to a large but capped amount of RAM? I personally don't care if the buffer can grow up to 128MB in dev mode. Alternatively, maybe rely on a fail-safe scanner that would yield a partial line when the buffer is exhausted?
I just hit the same issue with PostgreSQL when logging all SQL statements and inserting a 2.3MB blob. I thought I was going insane until I noticed that it wasn't my application or PostgreSQL, but process-compose itself that completely froze and had to be killed by force.
Hm, maybe there is some non-tmux thing to handle logs? By the way, can tmux do background sessions?
Fixed. Will be part of the next release. |
Fixed in v0.81.4 |
Hi! We use this tool in our team to orchestrate a web application on our development machines. It's been very pleasant so far.

One of the tools in our stack managed by `process-compose` is Hasura. Specifically, the process is a shell script that sets environment variables and then executes graphql-engine. For the longest time, we've observed `graphql-engine` occasionally freezing up for no apparent reason. A `process-compose` restart is enough to get it to behave again. Today I discovered something new. The problem is:

- worse the more logs `graphql-engine` outputs (when running it through `process-compose`)
- gone when reducing `graphql-engine` verbosity (when running it through `process-compose`)
- absent when running `graphql-engine` outside of `process-compose`

One way that I can run `graphql-engine` through `process-compose` without any freezing is by redirecting `stdout` to a log file, i.e. adding `1>graphql-engine.log` to the process shell script. Setting a `log_location` and modifying `log_level` in `process-compose.yaml` did not seem to fix freezing.

I can't say if the sheer amount of logs is what causes `process-compose` to choke or if something else is going on. If there exists an error log of `process-compose` itself or any other information that may be useful, let me know and I'll try to provide it.

I'll expand this issue if I discover anything more.
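For anyone trying the same workarounds, they look roughly like this in a `process-compose.yaml` sketch. The `log_location` and `log_level` field names come from the report above; the process name and script path are placeholders:

```yaml
# Global log settings (did not fix the freeze in this report):
log_level: info
log_location: ./process-compose.log

processes:
  graphql-engine:
    # Redirecting the process's own stdout to a file avoided the freeze:
    command: "./start-graphql-engine.sh 1> graphql-engine.log"
```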