SoftMaxHeapSize has no effect with Cassandra
Kornel Pal
kornelpal at gmail.com
Sun Dec 13 23:15:55 UTC 2020
Hi,
Thank you for implementing SoftMaxHeapSize; it greatly simplifies
keeping heap utilization low while also providing a buffer for
allocation spikes.
I've tried it with Cassandra, using the latest
openjdk-shenandoah-jdk8-linux-x86_64-server-release.tar.xz nightly build
and the default adaptive heuristics, but unfortunately
ShenandoahSoftMaxHeapSize does not seem to have any effect on GC behavior.
Setting ShenandoahMinFreeThreshold=50, on the other hand, resulted in
behavior similar to what I would expect from setting
ShenandoahSoftMaxHeapSize to half the heap size, which makes sense: with
a 50% minimum free threshold, a cycle is triggered once free space drops
below half the heap, i.e., once utilization rises above half the heap.
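For concreteness, the two runs were configured roughly along these lines
(the heap and soft max sizes here are illustrative, not my exact setup,
and I unlock experimental options in case the build requires that for
these flags):

  java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC \
       -Xmx8g -XX:ShenandoahSoftMaxHeapSize=4g ...

  java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC \
       -Xmx8g -XX:ShenandoahMinFreeThreshold=50 ...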
Cassandra allocates short-lived objects at a high rate, which might be a
different allocation pattern from the ones SoftMaxHeapSize was tested with.
I am not very familiar with the Shenandoah code, but I think that while
should_start_gc() removes the soft tail from the available heap,
choose_collection_set_from_regiondata() does not remove the soft tail
from actual_free (or increase free_target by the soft tail). This
results in no GC being performed, even when heap utilization is above
the soft limit.
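To illustrate what I mean, here is a heavily simplified sketch (my own
pseudo-C++, modeled on the adaptive heuristics but not the actual
HotSpot sources) of where the two code paths seem to disagree:

  // Heavily simplified; names follow the adaptive heuristics code.
  struct AdaptiveHeuristics {
    size_t max_capacity;      // hard limit (-Xmx)
    size_t soft_max_capacity; // ShenandoahSoftMaxHeapSize
    size_t min_free_pct;      // ShenandoahMinFreeThreshold

    // The soft tail is the space between the soft and the hard limit.
    size_t soft_tail() const { return max_capacity - soft_max_capacity; }

    // Trigger side: the soft tail is subtracted from the available
    // space, so cycles do start based on the soft limit.
    bool should_start_gc(size_t available) const {
      available = available > soft_tail() ? available - soft_tail() : 0;
      size_t min_threshold = soft_max_capacity / 100 * min_free_pct;
      return available < min_threshold;
    }

    // Collection set side: actual_free appears to still include the
    // soft tail, so free_target is treated as already met, min_garbage
    // becomes zero, and the cycle does not push utilization back below
    // the soft limit.
    size_t min_garbage(size_t actual_free) const {
      size_t free_target = soft_max_capacity / 100 * min_free_pct;
      // What I would expect instead is one of:
      //   actual_free -= soft_tail();  // remove the soft tail here too
      //   free_target += soft_tail();  // or grow the target by the tail
      return free_target > actual_free ? free_target - actual_free : 0;
    }
  };

If this reading is correct, either adjustment would make the collection
set selection agree with the trigger on where the heap effectively ends.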
Could you please check whether ShenandoahSoftMaxHeapSize in sh/8u
behaves as expected?
Thank you,
Kornel