RFR (M): 8212657: Implementation of JDK-8204089 Timely Reduce Unused Committed Memory
Ruslan Synytsky
synytskyy at jelastic.com
Thu Nov 15 17:10:45 UTC 2018
Hi guys, some new findings related to this topic. Previously we got a great
question from Stefan Johansson:
>> Another question, when
>> running in the cloud, what load is the user expecting us to compare
>> against, the overall system or the local container. I'm actually not
>> entirely sure what the getloadavg() call return in case of running in a
>> container.
> Good question! It depends on the container technology used. In short,
> if it’s a system container then it shows the load of the container; if
> it’s an application container, then the load of the host machine. There
> is an article on a related topic:
> https://jelastic.com/blog/java-and-memory-limits-in-containers-lxc-docker-and-openvz/
I found more details that will be useful for end users; here is a quick
summary (with a small test program after the list):
- VMs - the problem does not exist, as the JVM gets the loadavg of the
  virtual machine it’s running in.
- Containers
  - LXC - the problem does not exist, because LXCFS
    <https://github.com/lxc/lxcfs> makes Linux containers feel more like
    a virtual machine.
  - OpenVZ - the problem does not exist, as every container has a
    virtualized view of the /proc pseudo-filesystem.
  - Docker and runC - by default, loadavg is provided from the host
    machine, which is a problem for determining the real load inside a
    container. However, a recent improvement in the runC engine solves
    this issue: libcontainer: add /proc/loadavg to the white list of bind
    mount <https://github.com/opencontainers/runc/pull/1882>. There is
    also a useful related article, LXCFS for Docker and K8S
    <https://medium.com/@Alibaba_Cloud/kubernetes-demystified-using-lxcfs-to-improve-container-resource-visibility-86f48ce20c6>.
    So we can assume that this solution will be available by default in
    the near future.
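As a quick way to see which loadavg a JVM actually observes in a given
environment, here is a minimal sketch using only standard Java APIs (the
class name is just illustrative; it assumes a Linux-style /proc/loadavg,
the file that getloadavg() reads):

    import java.lang.management.ManagementFactory;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class LoadAvgCheck {
        public static void main(String[] args) throws Exception {
            // On Linux this value is backed by getloadavg().
            double load = ManagementFactory.getOperatingSystemMXBean()
                                           .getSystemLoadAverage();
            System.out.println("JVM-reported load average: " + load);

            // Raw view of /proc/loadavg. In a Docker/runC container
            // without LXCFS or the runc bind-mount fix above, this
            // shows the HOST's load, not the container's.
            String raw = new String(
                    Files.readAllBytes(Paths.get("/proc/loadavg")));
            System.out.println("/proc/loadavg: " + raw.trim());
        }
    }

Running it both on the host and inside a container makes it easy to
check whether the two numbers differ.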
Also, a quick summary of the current state of JVM elasticity (see the
command-line sketch after the list):
- G1 - the new options will be introduced soon:
*G1PeriodicGCInterval, **G1PeriodicGCSystemLoadThreshold,
G1PeriodicGCInvokesConcurrent, *the work is in progress.
- Shenandoah - the leading GC in terms of elasticity at the moment (my
  personal opinion). Available options: ShenandoahUncommitDelay,
  ShenandoahGuaranteedGCInterval.
- OpenJ9 - introduced special options: IdleTuningGcOnIdle,
  IdleTuningCompactOnIdle, IdleTuningMinIdleWaitTime. However, the
  uncommit is not fully transparent to end users, so it is harder to
  track the real usage. The only way to measure the effect is to monitor
  the resident memory size (RES) using top. It also implements a
  different approach to idle-state detection, based on
  samplerThreadStateLogic. For more details please see this conversation
  <https://github.com/eclipse/openj9/issues/2312#issuecomment-431453020>.
- ZGC - the work is in progress; there is a quick patch
  <http://cr.openjdk.java.net/~pliden/zgc/zrelease_unused_heap/webrev.0>
  to support releasing unused memory back to the OS in ZGC. If you want
  to try it out, the patch should apply to the latest jdk/jdk tree. Use
  the new option -XX:+ZReleaseUnusedHeap. A more refined version of the
  patch will most likely be upstreamed at some point in the future.
- Azul C4 - it seems to be vertically scalable too. I sent a
  clarification request to their Deputy CTO but have not received any
  feedback so far, so it's unknown how exactly it works. If anyone can
  share their personal experience, that would be useful.
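For reference, a rough sketch of how the options above would be passed
on the command line, assuming builds that include them. The G1 flags are
still under review, the ZGC flag only exists with the patch applied, and
the values below are illustrative only, not recommended defaults:

    # G1 (proposed flags, work in progress):
    java -XX:+UseG1GC -XX:G1PeriodicGCInterval=60000 \
         -XX:G1PeriodicGCSystemLoadThreshold=1.0 \
         -XX:+G1PeriodicGCInvokesConcurrent MyApp

    # Shenandoah (experimental at the moment):
    java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC \
         -XX:ShenandoahUncommitDelay=5000 \
         -XX:ShenandoahGuaranteedGCInterval=30000 MyApp

    # ZGC with the quick patch above applied:
    java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC \
         -XX:+ZReleaseUnusedHeap MyApp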
Thanks to everybody involved for moving this topic forward.
Regards
--
Ruslan
CEO @ Jelastic <https://jelastic.com/>