containerpilot spawns many threads (LWP) #490
You've got 2 ContainerPilot processes, as we'd expect. Based on the …
I'm assuming this is an LX Brand zone or a Docker container on sdc-docker? I'd be really interested to see the output of the process tree from the perspective of Linux via …
Right now the only place I see an opportunity for a resource leak in that code section is where we spawn a goroutine to reap child processes (see sup.go#L45). I'm not sure I can see a case in the handler where we'd find ourselves in the kind of infinite loop that would cause these goroutines to be persistent. But we should probably move the entire signal handler for …
The container was launched using sdc-docker. And although the container has already died, I could retrieve some Docker logs from it, probably confirming your suspicion about the file. Here is a snippet:
The logs are identical for all goroutines that are logged. There are 2539 goroutines in total in the log, but I'm sure the logs for another ~1000 goroutines were lost to log rotation. This is the entire log file: log.txt
containerpilot version: 2.7.7
cloud: samsung private cloud (based on JPC)
I noticed that the containerpilot process is creating many LWPs.
Below is the output of /native/usr/bin/prstat
There's a containerpilot process at the bottom with 3520 LWPs. We discovered this issue while investigating the cause of "Resource temporarily unavailable" errors when forking a new process. Note that even though the problematic containerpilot's PID is 31874 in prstat, it's actually PID 1 inside the container. The process also uses quite a large amount of memory (254M), probably because of the threads' stacks.