Shenandoah performance problem

Aleksey Shipilev shade at redhat.com
Tue Oct 22 12:00:00 UTC 2019


On 10/22/19 1:44 PM, Attila Axt wrote:
>>> https://imgur.com/S2vEmnL
>> I believe this shows that non-generational Shenandoah uses more CPU than a generational CMS: so the
>> GC cycles are longer on average, and then the total CPU time is larger. For single-threaded
>> workload, the heap occupancy might be much lower, so GC cycle length is smaller?
>>
>> It would be interesting to zoom in and see if CPU utilization differs when GC is idle to confirm.
> 
> Sorry, I wasn't clear enough about the measurement setup. All those graphs are measured the same time.
> 
> We are running a recommendation service. The incoming traffic is load-balanced between several
> application servers. I did the measurement on two of those servers, one configured with CMS, the
> other configured with Shenandoah.  These two nodes are receiving roughly the same traffic.

Ah, ok, this is a very nice setup for performance experiments. So the graphs say that the
http-main-# threads burn about 2x more cycles with Shenandoah, while the "Process queue" threads
burn about 1.2x more, in comparison with the same node running CMS.

In our experiments, that is usually explained by the barrier overhead. I think the difference
between these groups of threads is in the way they use the heap, not necessarily in how many
threads are running (and contending). If threads deal with Java references more, they experience
more barrier overhead.
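To make that distinction concrete, here is a rough sketch (the class and workloads are hypothetical, not from this thread): traversing a linked structure performs a reference load on every step, and each such load is instrumented by Shenandoah's barriers, while iterating a primitive array performs no reference loads inside the loop, so it carries almost no barrier cost.

```java
public class BarrierSketch {
    // Hypothetical reference-heavy data structure.
    static class Node {
        Node next;
        int value;
        Node(int v) { value = v; }
    }

    // Reference-heavy: every `cur.next` is a reference load, which
    // Shenandoah instruments with a GC barrier, so the per-element cost
    // includes barrier work on top of the memory access.
    static long sumLinked(Node head) {
        long sum = 0;
        for (Node cur = head; cur != null; cur = cur.next) {
            sum += cur.value;
        }
        return sum;
    }

    // Primitive-heavy: reading int elements involves no reference loads
    // inside the loop, so the barrier overhead is negligible here.
    static long sumArray(int[] data) {
        long sum = 0;
        for (int v : data) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        Node head = null;
        int[] data = new int[1000];
        for (int i = 999; i >= 0; i--) {
            Node n = new Node(i);
            n.next = head;
            head = n;
            data[i] = i;
        }
        // Both compute the same sum of 0..999; only the access pattern
        // (and thus the barrier exposure) differs.
        System.out.println(sumLinked(head)); // prints 499500
        System.out.println(sumArray(data));  // prints 499500
    }
}
```

Two thread groups running code like these two methods would see different Shenandoah overheads even on the same heap, which is the pattern suggested by the graphs.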

To quantify the barrier overhead, Shenandoah can be configured to run in STW mode with
-XX:ShenandoahGCMode=passive. It would run pretty much the same GC cycle, but under a pause. That
would affect latencies, but it allows disabling the barriers for diagnostics. "passive" would
disable all barriers ergonomically, and the GC log would also tell which barriers got disabled. You
can then selectively turn them back on and see if the overhead is explained by a particular barrier
class.
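A sketch of how such a diagnostic run might be set up (the application jar name is a placeholder, and the exact barrier flag names vary between JDK builds, so list them on your build first rather than copying these verbatim):

```shell
# List the Shenandoah barrier toggles available in this particular JDK build;
# names differ between versions, so always check before using them.
java -XX:+UnlockExperimentalVMOptions -XX:+UnlockDiagnosticVMOptions \
     -XX:+UseShenandoahGC -XX:+PrintFlagsFinal -version | grep -i shenandoah

# Baseline: passive mode disables the barriers ergonomically; the GC log
# (-Xlog:gc) reports which barriers were turned off.
java -XX:+UnlockExperimentalVMOptions -XX:+UnlockDiagnosticVMOptions \
     -XX:+UseShenandoahGC -XX:ShenandoahGCMode=passive \
     -Xlog:gc -jar app.jar   # app.jar is a placeholder

# Then re-enable one barrier class at a time (example flag; confirm it
# exists in your build via the PrintFlagsFinal output above) and compare
# CPU profiles against the baseline.
java -XX:+UnlockExperimentalVMOptions -XX:+UnlockDiagnosticVMOptions \
     -XX:+UseShenandoahGC -XX:ShenandoahGCMode=passive \
     -XX:+ShenandoahSATBBarrier \
     -Xlog:gc -jar app.jar
```

The difference in CPU burn between the baseline passive run and each selectively re-enabled run attributes the overhead to that barrier class.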

-- 
Thanks,
-Aleksey


