
runtime: respect memory ulimit more precisely #5049

Closed
rsc opened this issue Mar 14, 2013 · 10 comments

Comments

@rsc
Contributor

rsc commented Mar 14, 2013

The new mheap allocation takes 256 MB of virtual memory, and the ulimit-respecting code
does not account for that, so programs running under ulimits fail. More generally, the
ulimit is used to size the heap, but we have significant memory resources outside the
heap - like goroutine stacks! - that must fall under the limit too.

https://golang.org/cl/7672044 has an attempt at this. It mostly works, but it
fails with SIGBUS on OS X in the net/http short test. I have not yet tried it on Linux;
perhaps gdb will be more helpful there.
@gopherbot
Contributor

Comment 1 by jeff.allen:

When I try CL 7672044 on 64-bit Linux, I get
```
runtime: address space conflict: map(0xbfffff0000) = 0x7fc2d6a4c000
fatal error: runtime: address space conflict
```
while running make.bash, when it tries to build cmd/go.

@robpike
Contributor

robpike commented Mar 26, 2013

Comment 2:

https://golang.org/cl/7741050/

@ianlancetaylor
Contributor

Comment 3:

Disabled the rlimit check for 1.1.

Labels changed: removed go1.1.

@rsc
Contributor Author

rsc commented Jul 30, 2013

Comment 4:

Labels changed: added priority-someday, removed priority-later.

@rsc
Contributor Author

rsc commented Dec 4, 2013

Comment 5:

Labels changed: added repo-main.

@rsc
Contributor Author

rsc commented Mar 3, 2014

Comment 6:

Adding Release=None to all Priority=Someday bugs.

Labels changed: added release-none.

@rsc rsc added this to the Unplanned milestone Apr 10, 2015
mithro added a commit to mithro/go that referenced this issue Mar 16, 2016
----------------------------------------------------------------------------

Currently, if the Go runtime tries to create a new system thread and is unable
to do so, it fails with an error like:
```
18:22:18.752169 [go test -timeout 600s -v -race ./common/paniccatcher] was slow: 3m11.377s
runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
PC=0x7f54695bdcc9
```

One reason for this is that the system has a low process limit. For a
long time it was fairly common for systems to allow 10k processes or more, but with
systemd and Linux 4.3 the default limit can be as low as 512.

Most of the code that calls pthread_create in src/runtime/cgo does
something like:
```
	err = pthread_create(&p, &attr, threadentry, ts);

	pthread_sigmask(SIG_SETMASK, &oset, nil);

	if (err != 0) {
		fprintf(stderr, "runtime/cgo: pthread_create failed: %s\n", strerror(err));
		abort();
	}
```
This actually seems reasonable, as recovering from a failed thread creation is
hard. Likewise, creating more threads than your system's process limit does feel
like a "just don't do that" kind of thing.

However, from what I can see, the goroutine scheduler will create up to
sched.maxmcount threads, and this is initialized to 10k in proc.go at
line 425 (https://github.com/golang/go/blob/master/src/runtime/proc.go#L425).

Linux provides an API for querying the current thread limit: the getrlimit call
with RLIMIT_NPROC (see http://man7.org/linux/man-pages/man2/setrlimit.2.html).
getrlimit is already exposed to Go code as syscall.Getrlimit, but the package is
missing the RLIMIT_NPROC constant needed to query this limit.

This is similar to the idea of respecting the memory limit (see
https://github.com/golang/go/blob/master/src/runtime/os1_linux.go#L270) and is
probably related to golang#5049.

----------------------------------------------------------------------------
@robpike
Contributor

robpike commented Aug 22, 2016

Timed out.

@mihasya

mihasya commented Aug 22, 2016

Incredible timing for that "time out." We just found this issue because we're hitting this exact problem with our services running in Marathon. What can be done to help get this prioritized and fixed? I imagine this is a problem for anyone running Golang apps in an environment such as Marathon or Kubernetes - those environments are only getting more common.

@bradfitz
Contributor

@mihasya, we're just merging multiple issues into a more focused one. This is still a problem, but we're tracking in #16843 now.

@gopherbot
Contributor

CL https://golang.org/cl/35252 mentions this issue.

gopherbot pushed a commit that referenced this issue Feb 7, 2017
mallocinit has evolved organically. Make a pass to clean it up in
various ways:

1. Merge the computation of spansSize and bitmapSize. These were
   computed on every loop iteration of two different loops, but always
   have the same value, which can be derived directly from _MaxMem.
   This also avoids over-reserving these on MIPS, where _MaxArena32 is
   larger than _MaxMem.

2. Remove the ulimit -v logic. It's been disabled for many releases
   and the dead code paths to support it are even more wrong now than
   they were when it was first disabled, since now we *must* reserve
   spans and bitmaps for the full address space.

3. Make it clear that we're using a simple linear allocation to lay
   out the spans, bitmap, and arena spaces. Previously there were a
   lot of redundant pointer computations. Now we just bump p1 up as we
   reserve the spaces.

In preparation for #18651.

Updates #5049 (respect ulimit).

Change-Id: Icbe66570d3a7a17bea227dc54fb3c4978b52a3af
Reviewed-on: https://go-review.googlesource.com/35252
Reviewed-by: Russ Cox <rsc@golang.org>
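The "bump p1 up as we reserve the spaces" layout in point 3 can be sketched as follows. This is a toy illustration of the bump-pointer technique, not the runtime's actual mallocinit code, and the sizes are made-up:

```go
package main

import "fmt"

// Toy sketch of the linear layout the commit describes: spans,
// bitmap, and arena regions are carved out of one reservation by
// advancing a single pointer, instead of computing each base
// address redundantly.
func main() {
	const (
		base       uintptr = 0x10000 // hypothetical reservation base
		spansSize  uintptr = 0x400   // made-up region sizes
		bitmapSize uintptr = 0x800
		arenaSize  uintptr = 0x4000
	)
	p1 := base
	spans := p1
	p1 += spansSize
	bitmap := p1
	p1 += bitmapSize
	arena := p1
	p1 += arenaSize
	fmt.Printf("spans=%#x bitmap=%#x arena=%#x end=%#x\n",
		spans, bitmap, arena, p1)
}
```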
golang locked and limited conversation to collaborators Feb 6, 2018