Analyzing Efficiency Of Java Memory Usage

Volker Simonis volker.simonis at gmail.com
Fri Jul 17 14:43:55 UTC 2020


On Wed, Jul 15, 2020 at 9:07 PM Ruslan Synytsky <rs at jelastic.com> wrote:
>
> Hi all, I would like to share some preliminary results related to Java
> elasticity analysis. We collected metrics from about 1000 containers. The
> goal was to get a rough picture of the memory usage efficiency within our
> user base. These containers are running different software stacks, Java
> versions, and dev or production workloads, and they belong to different
> end users. Each container may have one or multiple Java processes running
> inside, and even additional auxiliary non-Java processes.
>
> At the following link you can find a dynamic chart that reflects memory
> usage at a specific point in time (it's a one-time measurement)
> https://cs.demo.jelastic.com/chart&appid=cc492725f550fcd637ab8a7c1f9810c9
> where
> - X axis represents hosts; each host is a separate container
> - Y Axis:
>   * OSTotal: total RAM available to the container
>   * Xmx: max heap limit
>   * NonHeapCommitted+: committed non-heap memory stacked on top of
> committed heap memory
>   * HeapCommitted: committed heap memory
>   * HeapUsed: used heap memory
>   * OSBuffCache+: OS memory in buffers/cache stacked on top of OS used
> memory
>   * OSUsed: OS used memory
>   * Count: number of Java processes running inside the host
>   ---
>   * Java version for each Java process
>   * GC type for each Java process
>   * Java Agent: n/count - how many processes are using the Jelastic GC
> agent (used before JEP 346)
>   * G1PeriodicGCInterval: n/count - how many processes are using this new
> option
>
> The chart is ordered from left to right by the following sequence:
> OSTotal, Xmx, NonHeapCommitted, HeapCommitted, HeapUsed, OSBuffCache,
> OSUsed.
>
> In general, the results look good, as the vast majority of Java processes
> are running in a quite efficient mode (small difference between used heap
> and committed heap). There are some Java processes that have a big
> difference between used heap and committed heap (for example host 91 or
> host 109), but such processes do not use memory usage optimization (no
> configured GC agent and no G1PeriodicGCInterval), or they use a
> non-compacting GC. Interesting findings are that a) sometimes committed
> heap memory is higher than the real OS used memory

Yes, this is a long-standing, annoying problem which is most probably
caused by the fact that "Committed" as reported by the VM/NMT is not
really the same as the RSS reported by system tools. I've opened
"8249666: Improve Native Memory Tracking to report the actual RSS
usage" [0] to track this. Here's an excerpt from the issue describing
the problem:

Currently, NMT shows allocated memory as either "Reserved" or
"Committed". Reserved memory is actually just reserved virtual
address space which was mmap'ed with MAP_NORESERVE, while Committed
memory is mapped without MAP_NORESERVE. In the output of top or pmap,
both Reserved and Committed show up as "Virtual" memory until they
are used for the first time (i.e. touched). Only after a memory page
(usually 4k) has been written to for the first time will it consume
physical memory and appear in the "resident set" (i.e. RSS) of
top's/pmap's output.
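
To illustrate the reserve/commit/touch distinction, here's a minimal,
Linux-only C sketch (just an illustration of the same mmap pattern,
not HotSpot code) which prints its own VmRSS after each step:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Print this process's VmRSS line from /proc/self/status (Linux only). */
static void print_rss(const char *label) {
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (f == NULL) { perror("fopen"); return; }
    while (fgets(line, sizeof(line), f) != NULL) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            printf("%-15s %s", label, line);
        }
    }
    fclose(f);
}

int main(void) {
    const size_t size = 64 * 1024 * 1024;  /* 64m */

    /* "Reserve": virtual address space only. */
    char *mem = mmap(NULL, size, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap (reserve)"); return 1; }
    print_rss("after reserve:");

    /* "Commit": re-map the same range without MAP_NORESERVE. The pages
       are now accounted against the overcommit limit, but still not
       resident. */
    if (mmap(mem, size, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) {
        perror("mmap (commit)"); return 1;
    }
    print_rss("after commit:");

    /* "Touch": only now do the pages show up in RSS. */
    memset(mem, 0, size);
    print_rss("after touch:");

    munmap(mem, size);
    return 0;
}

RSS should stay near its starting value after both the reserve and the
commit step, and jump by roughly 64m only after the memset.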

The difference between allocating memory with or without MAP_NORESERVE
depends on the Linux memory overcommit configuration [1]. By default,
overcommit is allowed, and memory allocated with MAP_NORESERVE isn't
checked against the available physical memory (see man proc(5) [2]).
If the HotSpot VM tries to commit reserved memory (i.e. re-map without
MAP_NORESERVE a memory region which was previously mapped with
MAP_NORESERVE) and there's not enough free memory available, an
OutOfMemoryError will be thrown.

But even committing a memory region doesn't mean that physical memory
pages will be allocated for that region (and accounted in the
process's RSS) until that memory is written to for the first time. So
depending on the overcommit settings, an application might still crash
with a SIGBUS because it is running out of physical memory when it
first touches memory which was committed a long time ago.
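
For reference, the overcommit mode in effect can be read from
/proc/sys/vm/overcommit_memory; the value meanings in the comment
below are taken from the kernel documentation [1]. A trivial sketch:

#include <stdio.h>

/* Print the Linux overcommit mode:
     0 - heuristic overcommit (the default)
     1 - always overcommit, never check
     2 - strict accounting: the commit step itself fails with ENOMEM
         instead of a possible crash later at touch time */
int main(void) {
    int mode;
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    if (f == NULL) { perror("fopen"); return 1; }
    if (fscanf(f, "%d", &mode) == 1) {
        printf("vm.overcommit_memory = %d\n", mode);
    }
    fclose(f);
    return 0;
}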

The main problem with the current NMT output is that it cannot
distinguish between touched and untouched Committed memory. If a VM is
started with -Xms1g -Xmx1g, the VM will commit the whole 1g heap and
NMT will report Reserved=Committed=1g. In contrast, system tools like
ps/top will only show as RSS the part of the heap which has really
been used (i.e. touched), usually just about 100m. This is at least
confusing.

But we can do better. We can use mincore() [3] to find the resident
(RSS) part of the memory which is accounted as Committed in NMT's
output and report that instead (or in addition). Notice that this
feature has already been implemented for thread stacks with
"JDK-8191369: NMT: Enhance thread stack tracking" [4] and just needs
to be extended to all other kinds of memory, starting with the Java
heap.
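
The mechanism could look roughly like the following C sketch (an
illustration of the mincore() usage, not the actual HotSpot change):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

/* Count how many bytes of [addr, addr+len) are currently resident,
   using one mincore() vector entry per page. */
static size_t resident_bytes(void *addr, size_t len) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t pages = (len + page - 1) / page;
    unsigned char *vec = malloc(pages);
    size_t resident = 0;
    if (vec == NULL || mincore(addr, len, vec) != 0) {
        free(vec);
        return 0;
    }
    for (size_t i = 0; i < pages; i++) {
        if (vec[i] & 1) resident += page;  /* low bit: page is resident */
    }
    free(vec);
    return resident;
}

int main(void) {
    size_t size = 16 * 1024 * 1024;
    /* Commit a region, then touch only the first half of it
       (the stride assumes 4k pages, good enough for a demo). */
    char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }
    for (size_t i = 0; i < size / 2; i += 4096) mem[i] = 1;
    printf("committed: %zu, resident: %zu\n",
           size, resident_bytes(mem, size));
    munmap(mem, size);
    return 0;
}

On a region like this, NMT today would report all 16m as Committed,
while the mincore() scan reports only the roughly 8m that were
actually touched.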

Alternatively, instead of using mincore() we could use the information
from /proc/<pid>/smaps (also accessible through the pmap [5] command
line utility) directly and merge it with the NMT data to get a
complete, annotated overview of the whole address space of a Java
process.
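
The smaps side of this is easy to prototype; e.g. summing the
per-mapping Rss values (shown here for the current process; keeping
the per-mapping values and merging them with the NMT data is the part
that would still need to be designed):

#include <stdio.h>

/* Sum the per-mapping Rss values from /proc/self/smaps. The same
   parsing applied to /proc/<pid>/smaps gives the resident set of
   another process, broken down per mapping. */
int main(void) {
    char line[512];
    long kb, total_kb = 0;
    FILE *f = fopen("/proc/self/smaps", "r");
    if (f == NULL) { perror("fopen"); return 1; }
    while (fgets(line, sizeof(line), f) != NULL) {
        if (sscanf(line, "Rss: %ld kB", &kb) == 1) total_kb += kb;
    }
    fclose(f);
    printf("total RSS: %ld kB\n", total_kb);
    return 0;
}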

[0] https://bugs.openjdk.java.net/browse/JDK-8249666
[1] https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
[2] https://man7.org/linux/man-pages/man5/proc.5.html
[3] https://man7.org/linux/man-pages/man2/mincore.2.html
[4] https://bugs.openjdk.java.net/browse/JDK-8191369
[5] https://man7.org/linux/man-pages/man1/pmap.1.html

> , and b) in many cases non-heap committed
> memory is as big as the committed heap.
>
> Please note, the analysis does not represent all possible use cases but
> rather represents web applications and services that are hosted at local
> cloud hosting providers. Also, most likely our analysis does not reflect
> the reality outside of Jelastic, because by default we optimize Java
> runtimes for maximum efficiency in terms of memory usage, so I expect
> that the real-world situation is worse.
>
>
>
> We are planning to add more containers to the analysis to get an even
> bigger picture, but I'm also looking for other companies that can join
> this analysis. It will help the community to get a better understanding
> of the real (diverse and heterogeneous) Java world. Please let me know if
> you are willing to participate and I will share a bash script that
> collects the memory usage metrics (it will not collect sensitive or
> personal information, and it is safe to run in production).
>
> Looking forward to getting your feedback.
> Thanks
> --
> Ruslan Synytsky
> CEO @ Jelastic Multi-Cloud PaaS


