SoftMaxHeapSize has no effect with Cassandra
Aleksey Shipilev
shade at redhat.com
Wed Jan 6 11:50:47 UTC 2021
Hi,
On 12/14/20 12:15 AM, Kornel Pal wrote:
> Thank you for implementing SoftMaxHeapSize, it greatly simplifies
> keeping the heap utilization low while also providing a buffer for
> allocation spikes.
>
> I've tried it with Cassandra using the latest
> openjdk-shenandoah-jdk8-linux-x86_64-server-release.tar.xz nightly build
> and the default adaptive heuristics, but unfortunately
> ShenandoahSoftMaxHeapSize does not seem to have any effect on the GC.
> Setting ShenandoahMinFreeThreshold=50, on the other hand, resulted in
> behavior similar to what I would expect from setting
> ShenandoahSoftMaxHeapSize to half the heap size.
Could you post the full list of GC flags you are using, and perhaps excerpts
from the GC logs that show the unexpected (with ShenandoahSoftMaxHeapSize set)
and expected (with ShenandoahMinFreeThreshold set) behaviors?
This is only to confirm that we are seeing the same issue.
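For reference, a flag combination along these lines should exercise the
setting; the sizes and the "MyApp" main class are purely illustrative,
not taken from your setup:

  java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC \
       -Xms8g -Xmx8g -XX:ShenandoahSoftMaxHeapSize=4g \
       -verbose:gc -XX:+PrintGCDetails MyApp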
> Cassandra is allocating short-lived objects at a high rate, which might be
> a different allocation pattern from what SoftMaxHeapSize was tested with.
>
> I am not very familiar with the Shenandoah code, but I think that while
> should_start_gc() removes the soft tail from the available heap,
> choose_collection_set_from_regiondata() does not remove the soft tail from
> actual_free (or increase free_target by the soft tail), which results in
> no GC being performed even when heap utilization is above the soft limit.
>
> Could you please check whether ShenandoahSoftMaxHeapSize in sh/8u is
> behaving as expected?
I think you are right here. This bug manifests when the heap is fragmented
enough: the GC cycle triggers, but finds no candidate regions to compact the
heap down to the new soft max target.
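To make the mismatch concrete, here is a heavily simplified, self-contained
sketch. The names and numbers only mirror the shape of the adaptive
heuristics; they are illustrative, not the actual sources:

  // sketch.cpp -- illustrative only, not the HotSpot code.
  // Scenario: -Xmx8g, -XX:ShenandoahSoftMaxHeapSize=4g,
  // ShenandoahMinFreeThreshold at its default of 10 (percent).
  #include <cstdio>
  #include <cstddef>

  static const size_t G = 1024uL * 1024 * 1024;
  static const size_t max_capacity      = 8 * G;
  static const size_t soft_max_capacity = 4 * G;
  static const size_t min_free_pct      = 10;

  // Trigger side: the soft tail IS subtracted from available,
  // so the cycle starts once usage crosses the soft limit.
  bool should_start_gc(size_t available) {
    size_t soft_tail = max_capacity - soft_max_capacity;
    available = (available > soft_tail) ? available - soft_tail : 0;
    return available < max_capacity / 100 * min_free_pct;
  }

  // Cset side (the suspected bug): free_target is compared against
  // actual_free WITHOUT removing the soft tail, so it is trivially
  // satisfied and min_garbage comes out 0 -- no regions are forced
  // into the collection set.
  size_t min_garbage(size_t actual_free) {
    size_t free_target = max_capacity / 100 * min_free_pct;
    return (free_target > actual_free) ? free_target - actual_free : 0;
  }

  int main() {
    size_t free_now = 3 * G;  // 5g used: already above the 4g soft max
    printf("should_start_gc: %d\n", should_start_gc(free_now)); // 1
    printf("min_garbage:     %zu\n", min_garbage(free_now));    // 0
    return 0;
  }

With min_garbage at zero, only regions above the garbage threshold get
picked, and on a fragmented heap there may be none: the cycle runs but
reclaims nothing toward the soft max target.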
Filed:
https://bugs.openjdk.java.net/browse/JDK-8259310
--
Thanks,
-Aleksey