Troubles with Shenandoah
Simone Bordet
simone.bordet at gmail.com
Mon Apr 8 19:21:57 UTC 2019
Hi,
On Mon, Apr 8, 2019 at 8:31 PM Aleksey Shipilev <shade at redhat.com> wrote:
>
> On 4/7/19 2:41 PM, Simone Bordet wrote:
> > I will happily re-run the benchmark with your suggestions.
>
> After running the benchmark myself, at least one trouble I see is here:
>
> > -XX:InitialHeapSize=17179869184 -XX:MaxHeapSize=51539607552
> > -XX:+PrintCommandLineFlags -XX:ReservedCodeCacheSize=251658240
> > -XX:+SegmentedCodeCache -XX:+UnlockExperimentalVMOptions
> > -XX:+UseShenandoahGC
>
> The benchmark calls System.gc() between benchmark rounds, which uncommits almost entire Shenandoah
> heap, which is intentional, because Shenandoah treats System.gc() as the command to compact the heap
> as densely as possible. But this also means the next benchmarking round has to commit all that
> memory back. This gives you the latency hit, which probably manifests as timeouts/lost packets too?
Timeouts and lost packets would only happen if the latency/pause is
quite long, 5+ seconds.
Could recommitting the memory cause pauses that long? (A rough way to
check this is sketched below.)
With the default configuration and at least 2 GiB of heap, I would
expect such failures to be very unlikely, especially when no real
network is involved (e.g. the benchmark runs over loopback).
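To answer the recommit question above, I could run a crude probe like
the following (not the real benchmark; the class name, heap size and
allocation sizes are placeholders I made up), once with the defaults
and once with -XX:-ShenandoahUncommit or -XX:+AlwaysPreTouch, and
compare the per-round times:

import java.util.Arrays;
import java.util.concurrent.TimeUnit;

// Run with e.g.:
//   java -Xmx4g -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC RecommitProbe
public class RecommitProbe {
    public static void main(String[] args) {
        byte[][] blocks = new byte[1024][];
        for (int round = 0; round < 5; round++) {
            // Drop all references so the explicit GC can compact and
            // (with uncommit enabled) shrink the committed heap.
            Arrays.fill(blocks, null);
            System.gc();
            long start = System.nanoTime();
            // Touch ~1 GiB of fresh memory, forcing it to be committed again.
            for (int i = 0; i < blocks.length; i++) {
                blocks[i] = new byte[1024 * 1024];
            }
            long millis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
            System.out.println("round " + round + ": " + millis + " ms");
        }
    }
}

This measures allocation plus whatever GC work happens in between, so
it is only a proxy for the commit cost, but it should show whether the
first allocations after System.gc() are noticeably slower.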
> The timing distribution had improved very significantly after either -XX:-ShenandoahUncommit or
> -XX:+AlwaysPreTouch is supplied.
Nice.
> For comparison across GCs and OSes that have different takes on heap management, I would run with
> "-Xms${H}g -Xmx${H}g -XX:+AlwaysPreTouch" to stick the heap at max capacity and wired up at all
> times. This advice applies to Shenandoah [1], as well as other OpenJDK collectors.
Thanks for this tip.
> I also note the workload is fully-young, so differences between current GCs would probably be small.
Indeed. The client connections and session objects are long-lived, so
the pattern is a large number of objects that die young while being
referenced by long-lived objects (roughly the shape sketched below).
It is not explicitly a GC benchmark, but for me it is a litmus test
for JVM behavior (GC, JIT, etc.) as well as for Jetty and CometD
behavior.
The benchmark is pretty close to a real application (rather than being
synthetic code) and we use it to flamegraph Jetty, CometD, etc.
It's what I had handy :)
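For illustration, the object lifetime pattern mentioned above looks
roughly like this (class and field names are invented for the sketch,
this is not the actual Jetty/CometD code):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LifetimeShape {
    // Long-lived: survives the whole run, one entry per client connection.
    static final Map<String, Session> sessions = new ConcurrentHashMap<>();

    static class Session {
        Message lastMessage; // briefly keeps a young object alive
    }

    static class Message {
        final byte[] payload = new byte[512];
    }

    static void onMessage(String clientId) {
        Session session = sessions.computeIfAbsent(clientId, id -> new Session());
        // Short-lived: a new message per request; it is referenced by the
        // long-lived session only until the next message arrives, then it
        // becomes garbage while still young.
        session.lastMessage = new Message();
    }
}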
--
Simone Bordet
---
Finally, no matter how good the architecture and design are,
to deliver bug-free software with optimal performance and reliability,
the implementation technique must be flawless. Victoria Livschitz