ParallelGC, large old generation when optimizing for footprint goal
Albert Yang
albert.m.yang at oracle.com
Fri Nov 8 09:08:51 UTC 2024
> I run with an allocation pressure of 512MB/sec
If the alloc-rate is 512M/s and init-heap-size is 512M, it's indeed expected that young-gc is frequent -- the default eden-size is ~150M.
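(For reference, with the default NewRatio=2 the young gen is roughly a third of the 512M initial heap, and eden is most of that, which is where the ~150M comes from. The derived sizes can be checked with something like the command below -- exact values may differ slightly per platform/release. At 512M/s of allocation into a ~150M eden, that works out to a young-gc roughly every 0.3s.)
`java -Xms512m -Xmx8g -XX:+UseParallelGC -XX:+PrintFlagsFinal -version | grep -E 'NewSize|NewRatio|SurvivorRatio'`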
> One can observe how young gen starts at ~150MB, shrinks to ~60MB, and old gen grows till it hits the ceiling at ~5.5GB.
This is definitely undesirable, and as you put it, "runs counter to the footprint goal". I have been working on JDK-8338977, and the current prototype maintains heap-capacity under ~600M.
Thank you for providing this benchmark (and the config); I will include results for it when I send out the PR.
/Albert
________________________________________
From: hotspot-gc-dev <hotspot-gc-dev-retn at openjdk.org> on behalf of Thomas Stüfe <thomas.stuefe at gmail.com>
Sent: Thursday, November 7, 2024 14:51
To: hotspot-gc-dev at openjdk.java.net
Subject: ParallelGC, large old generation when optimizing for footprint goal
Hi,
I have a question about some odd behavior I observe when ParallelGC optimizes for footprint.
If I omit a pause time goal and relax the throughput goal enough, the JVM should optimize for the footprint goal. But if the JVM was started with a small young gen (e.g. because the initial heap size was small), it seems to go into a tailspin where the young gen stays tiny or even shrinks further and further, resulting in lots of promotions; the old gen grows until it hits the ceiling, a Full GC runs, and then the cycle repeats. That maximizes RAM use and thus runs counter to the footprint goal.
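(Side note, in case it helps with reproducing: the adaptive-size-policy decisions driving this should be visible via the gc+ergo log tags -- tag granularity here is from my reading of HotSpot's unified logging and may differ per release -- e.g. by replacing -Xlog:gc* in the command below with:
`-Xlog:gc*,gc+ergo*=debug`)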
Example: I run heapothesys/hyperalloc [1] with JDK 21. I run with an allocation pressure of 512MB/sec and a live set size of 128MB.
`java -Xlog:gc* -Xmx8g -Xms512m -XX:+UseParallelGC -XX:GCTimeRatio=1 -jar ./target/HyperAlloc-1.0.jar -h 8192 -a 512 -s 128`
One can observe how young gen starts at ~150MB, shrinks to ~60MB, and old gen grows till it hits the ceiling at ~5.5GB. Increasing the initial heap size mitigates the problem: Eden still shrinks but settles at a larger size. We still get very frequent young GCs, though.
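For illustration, the same invocation with a larger initial heap (the 2g here is an arbitrary pick to show the mitigation, not a tuned value):
`java -Xlog:gc* -Xmx8g -Xms2g -XX:+UseParallelGC -XX:GCTimeRatio=1 -jar ./target/HyperAlloc-1.0.jar -h 8192 -a 512 -s 128`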
Ironically, the problem is more likely to occur in containers with little RAM. Eden size depends on initial heap size, which depends on total RAM (even if -Xmx was set). Little RAM -> tiny Eden. Therefore, less RAM can cause the JVM to use more memory. That behavior can easily be observed with different values for MaxRAM: calling the above program with -XX:MaxRAM=10g will cause the JVM to enter the tailspin immediately; the process peaks at >5GB RSS. The same program with -XX:MaxRAM=128g causes the process to use just ~1.2GB RSS, since the young gen stays sensibly large and thus the total heap size never grows that much.
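(If I read the defaults right, the dependence comes from the initial-heap ergonomics: InitialRAMPercentage defaults to 1.5625, i.e. 1/64 of MaxRAM, so -- assuming -Xms is left unset for those runs -- -XX:MaxRAM=10g gives an initial heap of ~160MB while -XX:MaxRAM=128g gives ~2GB. The computed value can be checked with:
`java -XX:MaxRAM=10g -XX:+PrintFlagsFinal -version | grep InitialHeapSize`)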
I looked into the tuning guide [2] but did not find information about how exactly the footprint goal is reached. For ParallelGC, it just states: "Footprint: The maximum heap footprint is specified using the option -Xmx<N>. In addition, the collector has an implicit goal of minimizing the size of the heap as long as the other goals are being met." which looks to me like it should work with default settings, out of the box.
Am I making a thinking error somewhere? Is this a bug or is this behavior expected?
Thank you,
Thomas
[1] https://github.com/corretto/heapothesys/tree/master/HyperAlloc
[2] https://docs.oracle.com/en/java/javase/11/gctuning/parallel-collector1.html#GUID-DCDD6E46-0406-41D1-AB49-FB96A50EB9CE