RFR (M): 8212657: Implementation of JDK-8204089 Timely Reduce Unused Committed Memory
Ruslan Synytsky
rs at jelastic.com
Mon Dec 17 20:44:41 UTC 2018
Hi, here are quick test results from build 24. The option -XX:G1PeriodicGCInterval
works fine in our containers.
[root at node193618-jdk12 ~]# java -XX:G1PeriodicGCInterval=3000 -jar app.jar 1
07:32:05 -> Init: 32M Used: 1M Committed: 32M Max: 3276M
07:32:08 -> Init: 32M Used: 800M Committed: 1389M Max: 3276M
total time = 3357, ms
done
07:32:11 -> Init: 32M Used: 1079M Committed: 1546M Max: 3276M
07:32:14 -> Init: 32M Used: 1080M Committed: 1546M Max: 3276M
07:32:17 -> Init: 32M Used: 1081M Committed: 1546M Max: 3276M
07:32:20 -> Init: 32M Used: 15M Committed: 32M Max: 3276M
07:32:23 -> Init: 32M Used: 15M Committed: 32M Max: 3276M
07:32:26 -> Init: 32M Used: 13M Committed: 32M Max: 3276M
It's easy to repeat the test: just download app.jar
<https://github.com/jelastic/java-vertical-scaling-test/blob/master/dist/app.jar>
and run

    java -XX:G1PeriodicGCInterval=3000 -jar app.jar 1

then press <Enter> once more a second later.
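
For reference, the Init/Used/Committed/Max lines above can be produced
with just a few lines of Java (a minimal sketch of what app.jar presumably
does; the class name and the 3-second interval are my own):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;
    import java.time.LocalTime;
    import java.time.format.DateTimeFormatter;

    public class HeapMonitor {
        public static void main(String[] args) throws InterruptedException {
            DateTimeFormatter fmt = DateTimeFormatter.ofPattern("HH:mm:ss");
            while (true) {
                // Heap figures as seen by the JVM itself; "Committed" is the
                // number that should drop after a periodic GC uncommits
                // unused regions.
                MemoryUsage heap =
                    ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                System.out.printf("%s -> Init: %dM Used: %dM Committed: %dM Max: %dM%n",
                        LocalTime.now().format(fmt),
                        heap.getInit() >> 20, heap.getUsed() >> 20,
                        heap.getCommitted() >> 20, heap.getMax() >> 20);
                Thread.sleep(3000);
            }
        }
    }

Watching this output while the periodic GC kicks in is enough to verify
that the committed size really shrinks, as in the log above.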
There are a couple of small improvement suggestions related to the
description in JEP 346 <http://openjdk.java.net/jeps/346>:
    G1 triggers a periodic garbage collection if both:

    1) More than G1PeriodicGCInterval milliseconds have passed since any
    previous garbage collection pause...

    2) The average one-minute system load value as returned by the
    getloadavg() call on the host system is above
    G1PeriodicGCSystemLoadThreshold. This condition is ignored if
    G1PeriodicGCSystemLoadThreshold is zero.
The description of the loadavg threshold can be improved in the following
way:

    2) The average one-minute system load value as returned by the
    getloadavg() call on the JVM host system (VM or container) is below
    G1PeriodicGCSystemLoadThreshold. This condition is ignored if
    G1PeriodicGCSystemLoadThreshold is zero.
I believe the word "above" (highlighted in red in the original message) is
a technical mistake in the description; "below" is what's meant. The source
code comment is quite confusing too, but at least it sounds correct:

    Maximum recent system wide system load as returned by the 1m value of
    getloadavg() at which G1 triggers a periodic GC. A load above this
    value cancels a given periodic GC.

Also, the "(VM or container)" addition (in blue above) provides more
clarity on the host system.
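
To make the corrected condition concrete, here is a minimal Java sketch of
the check as I read it (the real check lives in G1's native code and calls
getloadavg() directly; getSystemLoadAverage() is the closest
standard-library equivalent, and the threshold variable just stands in for
G1PeriodicGCSystemLoadThreshold):

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    public class PeriodicGCLoadCheck {
        public static void main(String[] args) {
            double threshold = 2.0; // stand-in for G1PeriodicGCSystemLoadThreshold
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            // 1-minute load average; -1.0 if the platform cannot provide one.
            double load = os.getSystemLoadAverage();
            // Zero disables the load check entirely; otherwise a load above
            // the threshold cancels the periodic GC.
            boolean allowPeriodicGC =
                threshold == 0.0 || (load >= 0 && load <= threshold);
            System.out.printf("load=%.2f threshold=%.2f -> periodic GC %s%n",
                    load, threshold, allowPeriodicGC ? "allowed" : "skipped");
        }
    }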
One more minor suggestion: G1PeriodicGCInterval is currently defined in
milliseconds, which seems impractical to me. It's not clear why somebody
would need to trigger GC at intervals shorter than minutes, so people will
always have to write a lot of trailing zeros. Can we update it to seconds
at least?
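
For example (the seconds-based form is hypothetical, shown only to
illustrate the ergonomics):

    java -XX:G1PeriodicGCInterval=300000 -jar app.jar   # 5 minutes today (ms)
    java -XX:G1PeriodicGCInterval=300 -jar app.jar      # the same 5 minutes, if the unit were seconds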
Thanks
On Thu, 15 Nov 2018 at 18:10, Ruslan Synytsky <rs at jelastic.com> wrote:
> Hi guys, some new findings related to this topic. Previously we got a
> great question from Stefan Johansson:
>
> >> Another question, when
> >> running in the cloud, what load is the user expecting us to compare
> >> against, the overall system or the local container. I'm actually not
> >> entirely sure what the getloadavg() call return in case of running in a
> >> container.
>
> > Good question! It depends on the container technology used. In short,
> > if it's a system container then it shows the load of the container; if
> > it's an application container then the load of the host machine. There
> > is an article on a related topic:
> > https://jelastic.com/blog/java-and-memory-limits-in-containers-lxc-docker-and-openvz/
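>
> A quick way to check which loadavg the JVM actually observes inside a
> container is to read /proc/loadavg directly (a minimal sketch; on plain
> Docker, without LXCFS or the runC change mentioned below, it prints the
> host's values):
>
>     import java.nio.file.Files;
>     import java.nio.file.Path;
>
>     public class LoadAvgProbe {
>         public static void main(String[] args) throws Exception {
>             // /proc/loadavg holds the 1/5/15-minute load averages; in a
>             // plain Docker container this file is the host's unless it is
>             // bind-mounted (e.g. by LXCFS).
>             String line = Files.readString(Path.of("/proc/loadavg")).trim();
>             System.out.println("/proc/loadavg: " + line);
>         }
>     }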
>
> I found more details that will be useful for end users; here is a quick
> summary:
>
> - VMs - the problem does not exist, as the JVM gets the loadavg of the
>   virtual machine it's running in.
> - Containers:
>    - LXC - the problem does not exist, because LXCFS
>      <https://github.com/lxc/lxcfs> makes Linux containers feel more like
>      a virtual machine.
>    - OpenVZ - the problem does not exist, as every container has a
>      virtualized view of the /proc pseudo-filesystem.
>    - Docker and runC - by default, loadavg will be provided from the host
>      machine, which is a problem for determining the real load inside a
>      container. However, a recent improvement in the runC engine solves
>      this issue: libcontainer: add /proc/loadavg to the white list of
>      bind mount <https://github.com/opencontainers/runc/pull/1882>. There
>      is also a useful related article - LXCFS for Docker and K8S
>      <https://medium.com/@Alibaba_Cloud/kubernetes-demystified-using-lxcfs-to-improve-container-resource-visibility-86f48ce20c6>.
>      So, we can assume that this solution will be available by default in
>      the near future.
>
> Also, a quick summary - "a state of JVM elasticity" (command-line
> examples follow the list):
>
> - G1 - the new options will be introduced soon: G1PeriodicGCInterval,
>   G1PeriodicGCSystemLoadThreshold, G1PeriodicGCInvokesConcurrent; the
>   work is in progress.
> - Shenandoah - the leading GC in terms of elasticity at the moment (my
>   personal opinion). Available options: ShenandoahUncommitDelay,
>   ShenandoahGuaranteedGCInterval.
> - OpenJ9 - introduced special options: IdleTuningGcOnIdle,
>   IdleTuningCompactOnIdle, IdleTuningMinIdleWaitTime. However, the
>   uncommit is not fully transparent to end users, so it's harder to track
>   the real usage. The only way to measure the effect is to monitor the
>   resident memory size (RES) using top. Also, it implements a different
>   approach to detecting the idle state, based on samplerThreadStateLogic.
>   For more details please refer to this conversation
>   <https://github.com/eclipse/openj9/issues/2312#issuecomment-431453020>.
> - ZGC - the work is in progress; there is a quick patch
>   <http://cr.openjdk.java.net/~pliden/zgc/zrelease_unused_heap/webrev.0>
>   to support releasing memory back to the OS in ZGC. If you want to try
>   it out, the patch should apply to the latest jdk/jdk tree. Use the new
>   option -XX:+ZReleaseUnusedHeap. A more refined version of the patch
>   will most likely be upstreamed at some point in the future.
> - Azul C4 - it seems to be vertically scalable too. I sent a
>   clarification request to their Deputy CTO but have not received any
>   feedback so far, so it's unknown how exactly it works. If anyone can
>   share personal experience, that would be useful.
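>
> For quick reference, here is how the uncommit-related flags above are
> passed on the command line (values are illustrative; the ZGC flag exists
> only with the linked patch applied, and some of these options may require
> -XX:+UnlockExperimentalVMOptions depending on the build):
>
>     java -XX:+UseG1GC -XX:G1PeriodicGCInterval=3000 -XX:G1PeriodicGCSystemLoadThreshold=2 ...
>     java -XX:+UseShenandoahGC -XX:ShenandoahUncommitDelay=5000 -XX:ShenandoahGuaranteedGCInterval=30000 ...
>     java -XX:+IdleTuningGcOnIdle -XX:+IdleTuningCompactOnIdle ...   (OpenJ9)
>     java -XX:+UseZGC -XX:+ZReleaseUnusedHeap ...   (with the patch above)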
>
> Thanks to everybody involved for moving this topic forward.
> Regards
> --
> Ruslan
> CEO @ Jelastic <https://jelastic.com/>
>
--
Ruslan
CEO @ Jelastic <https://jelastic.com/>