
Increase default max heap only if memory subsystem is enabled #1797

Merged

Conversation

ashu-mehra
Contributor

The JVM should use a higher percentage of memory as the default max heap when running in a cgroup only if -XX:+UseContainerSupport is specified; that option is what enables the JVM to use the cgroup's memory subsystem limits. However, a missing check for the memory subsystem in MM_GCExtensions::computeDefaultMaxHeap() causes this behavior to be enabled by default. The fix is to add a check that the memory subsystem is enabled before using the higher percentage of the cgroup memory limit as the default max heap.

Signed-off-by: Ashutosh Mehra <asmehra1@in.ibm.com>

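For illustration, here is a minimal self-contained C++ sketch of the check described above. The struct, its field names, and the fallback sizing are hypothetical stand-ins for this example only; they are not the actual MM_GCExtensions::computeDefaultMaxHeap() implementation or the OpenJ9 port-library API.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical container state for this sketch; in OpenJ9 this information
// would come from the port library and the parsed VM options.
struct ContainerInfo {
    bool     memorySubsystemEnabled; /* true only when -XX:+UseContainerSupport is specified */
    bool     memoryLimitSet;         /* true when the cgroup imposes a memory limit */
    uint64_t memoryLimitBytes;       /* the cgroup memory limit, when set */
};

uint64_t computeDefaultMaxHeap(const ContainerInfo &info, uint64_t physicalMemoryBytes)
{
    // Use the higher fraction of the cgroup limit only when the memory
    // subsystem is actually enabled, i.e. container support was requested.
    if (info.memorySubsystemEnabled && info.memoryLimitSet) {
        return (info.memoryLimitBytes / 4) * 3; /* 75% of the cgroup memory limit */
    }

    // Fallback shown as "half of physical memory, at least 512 MiB" purely for
    // illustration; the real default heap sizing in OpenJ9 is more involved.
    const uint64_t halfGiB = 512ULL * 1024 * 1024;
    return std::max(halfGiB, physicalMemoryBytes / 2);
}
```
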
@ashu-mehra
Contributor Author

ashu-mehra commented Apr 30, 2018

I realized that the change made in #1517 has become the default behavior.

$ docker run -m2g --rm -it adoptopenjdk/openjdk8-openj9:nightly java -verbose:sizes -version | grep Xmx
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
  -Xmx1536M       memory maximum

Note that -Xmx gets set to 1.5 G even though I didn't specify the -XX:+UseContainerSupport option.

Ideally it should be protected by the -XX:+UseContainerSupport option, which is what enables the memory subsystem in the JVM. Therefore, I have added a check in MM_GCExtensions::computeDefaultMaxHeap() that the memory subsystem is enabled, to guard the changes made in #1517.

@charliegracie
Contributor

charliegracie commented Apr 30, 2018

I would expect that the JVM uses less memory by default in a container. This has been a significant feature for OpenJ9 compared to other JVMs. We should not be increasing the default Xmx when running in a container.

Can you provide more details on the rationale behind #1517? I am wondering if we should be backing #1517 out as this may cause a significant increase in footprint in Cloud environments.

@DanHeidinga and @dmitripivkine thoughts since you reviewed #1517?

@ashu-mehra
Contributor Author

@charliegracie #1429 details the reason for the change made in #1517.
Please correct me if I am wrong, but my understanding is that OpenJ9's memory advantage comes mostly from its conservative approach to heap expansion, which wouldn't change even if the default -Xmx is increased in a container. Isn't that the case?

@charliegracie
Contributor

My belief is that OpenJ9 has a smaller footprint for many reasons, but one of the major contributors is that by default we do not pick as large a -Xmx as other implementations. Also, the OpenJ9 GC policies do not expand as aggressively, and these two features work together.

The important part, I guess, is that we only do this if the container is configured with a max. Is there a default max for Docker containers? Can you distinguish between the default value and a value set via the command line / at build time? My personal preference would be to disable this feature if -Xtune:virtualized is used and go back to small heaps!

I think the changes for #1517 are actually good for non-container JVM instances. In containers I want the heap to be as small as possible (while still providing reasonable performance), and on a single machine I want the heap to be aggressive in using as many resources as allowed. The changes for #1517 are going in the wrong direction, in my opinion.

@DanHeidinga
Member

I think the rationale behind #1517 is that it's easier to control the max memory used for the Docker image at deployment than it is to rebuild the Docker image with a different -Xmx.

It gives the user one tunable rather than making them tune both -Xmx and the Docker image size.

@ashu-mehra
Contributor Author

The important part, I guess, is that we only do this if the container is configured with a max. Is there a default max for Docker containers? Can you distinguish between the default value and a value set via the command line / at build time?

There is no default max for Docker containers. If the user doesn't specify a memory limit for the container, it can use as much memory as it wants, just like any other process running on the host.
The change made in #1517 kicks in only if the user runs the container with a memory limit. E.g. if the user runs docker run <container> (i.e. no memory limit), then -Xmx is not increased, but if the user runs docker run -m2g <container> (i.e. a 2G memory limit), then -Xmx is set to 75% of 2G, which is the -Xmx1536M shown in the output above.
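To make the arithmetic concrete, here is a small standalone C++ snippet (illustrative only, not OpenJ9 source) that computes 75% of a 2G limit; it prints 1536, matching the -Xmx1536M line in the -verbose:sizes output earlier in this thread.

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    // docker run -m2g ... imposes a 2 GiB cgroup memory limit.
    const uint64_t limitBytes = 2ULL * 1024 * 1024 * 1024;

    // With container support enabled, the default max heap is 75% of that limit.
    const uint64_t defaultMaxHeap = (limitBytes / 4) * 3;

    // Prints "default max heap = 1536M".
    std::printf("default max heap = %lluM\n",
                (unsigned long long)(defaultMaxHeap / (1024 * 1024)));
    return 0;
}
```
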

@DanHeidinga
Member

DanHeidinga left a comment

lgtm

@DanHeidinga
Member

@charliegracie This looks like an improvement over the current code, which should have included this check from the beginning. As this is GC code, I'll defer to you or @dmitripivkine, but I think this should be merged.
