Well, it would be ideal if the heuristics didn’t consider max heap at all.

Aleksandr Dubinsky

On Sep 2, 2021, 19:50 +0300, Aleksey Shipilev <shade@redhat.com>, wrote:
On 9/2/21 6:32 PM, alex@syncwords.com wrote:
I did some playing around. Turns out it's kind of expensive to minimize mem usage when there's no young generation.
Yup, it is throughput-latency-footprint tradeoff. Generational would make it much less painful, but it would not solve it completely.
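[To make the footprint side of that tradeoff observable outside of a NetBeans run, here is a minimal Java sketch; it is an illustration, not code from this thread. It keeps a fixed live set of roughly 64 MiB while constantly replacing entries, so every iteration produces garbage and you can watch how far used heap drifts above the live set between GC cycles.

    public class AllocChurn {
        // Fixed-size live set: ~64k slots of 1 KiB arrays (~64 MiB live).
        static final byte[][] live = new byte[64_000][];

        public static void main(String[] args) throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            for (long n = 0; n < 100_000_000L; n++) {
                // Overwrite a slot: the new array becomes live, the old one garbage.
                live[(int) (n % live.length)] = new byte[1024];
                if (n % 1_000_000 == 0) {
                    long usedMiB = (rt.totalMemory() - rt.freeMemory()) >> 20;
                    System.out.println("used heap: " + usedMiB + " MiB");
                    Thread.sleep(1); // let log output interleave
                }
            }
        }
    }

Running it with -XX:+UseShenandoahGC under different heuristics shows how far the reported figure climbs above the ~64 MiB live set before a cycle triggers.]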
I tried `-J-XX:ShenandoahGCHeuristics=compact` and that didn't actually work very well. My test is to set Xmx to 16GB and launch NetBeans. Shenandoah didn't GC often enough, letting memory usage shoot up to 3x the size of the live set. One of the things the compact heuristics set is -XX:ShenandoahAllocationThreshold=10:
  product(uintx, ShenandoahAllocationThreshold, 0, EXPERIMENTAL,              \
          "How many new allocations should happen since the last GC cycle "   \
          "before some heuristics trigger the collection. In percents of "    \
          "(soft) max heap size. Set to zero to effectively disable.")        \
          range(0,100)                                                        \
With -Xmx16G that would mean about 1.6 GB allocated before a GC triggers. It might help to lower it additionally, for better compaction and even more GC throughput overhead :) This is one of those inputs that could be fed into prospective -z.
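[For reference, a hedged example of how these flags could be combined on the command line; the threshold value of 2 is only an illustration, not a recommendation from this thread. ShenandoahAllocationThreshold is experimental, hence the unlock flag:

    java -XX:+UnlockExperimentalVMOptions \
         -XX:+UseShenandoahGC \
         -XX:ShenandoahGCHeuristics=compact \
         -XX:ShenandoahAllocationThreshold=2 \
         -Xmx16g -Xlog:gc AllocChurn

For NetBeans itself, the same flags would go into netbeans.conf with the -J prefix, as in the -J-XX:ShenandoahGCHeuristics=compact example above.]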
--
Thanks,
-Aleksey