G1 issue: falling over to Full GC
Simone Bordet
sbordet at intalio.com
Thu Nov 1 15:27:33 PDT 2012
Hi,
On Thu, Nov 1, 2012 at 10:51 PM, Andreas Müller
<Andreas.Mueller at mgm-tp.com> wrote:
> Hi all,
>
> I have tested G1 for our portal (using Java7u7 on Solaris 10 /SPARC).
>
> The JVM is using a rather small heap of 1GB and the amount of garbage is
> moderate (in the range of 30-35 MB/s).
>
> ParallelGC and CMS have no problem coping with that load, but to get rid of
> the Full GC pauses (around 4s with ParallelGC) and to avoid any
> fragmentation risk (uptime is many weeks), I tried G1, too.
>
> The good news is that G1 has improved a lot since Java6 and now looks more
> ready to compete with the proven collectors.
>
> The good case JPEG (I’ll send in a second mail) shows the GC pauses (in
> seconds) as a function of time when I ran it with the following heap and GC
> settings:
>
> -Xms1024m -Xmx1024m -XX:NewSize=400m -XX:MaxNewSize=400m
> -XX:SurvivorRatio=18 -XX:+UseG1GC -XX:MaxGCPauseMillis=500.
>
> As a result, after an outlier during startup the longest GC pauses are
> shorter than with ParallelGC and the average pause is clearly shorter than
> the 500ms target.
>
> Some pauses are in the 1-2s range and I hoped to eliminate them by fine
> tuning the settings.
>
> Now, here starts the bad news: fine tuning proved more difficult than
> expected.
>
> I hoped to also halve the longest pauses by reducing the pause time target to
> 250ms, and therefore applied the following settings (leaving Xms, Xmx and
> NewSize unchanged):
>
> -XX:SurvivorRatio=6 -XX:MaxGCPauseMillis=250 -XX:GCPauseIntervalMillis=2000
> -XX:InitiatingHeapOccupancyPercent=80
>
> Making survivor spaces larger had proven positive with ParallelGC and CMS.
> So I also used it here. I also wanted to make better use of the available
> heap and therefore set the threshold to 80 percent. The result was kind of a
> disaster: after a benign start, pause times quickly rose to the range of
> 1-4(!)s, and later G1 fell over to Full GCs (have a glance at the bad case
> JPEG).
>
> Here are my questions:
>
> - What did I do wrong? Which setting was my biggest error, and why?
When you manually specify the eden size, you basically disable G1's
pause-time target, which is then ignored (if you set the eden size
yourself, G1 assumes you know better).
Have you tried *not* specifying NewSize or SurvivorRatio, and letting
G1 do its work?
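A minimal command line along those lines might look as follows (a sketch:
the heap size and pause target are taken from the original post, while
"portal.jar" is a placeholder for the actual application; only the heap
bounds and the pause goal are fixed, so G1 can size eden and the survivor
spaces adaptively):

```shell
# Fix only total heap size and the pause-time goal; leave NewSize,
# MaxNewSize and SurvivorRatio unset so G1's adaptive sizing stays active.
# "portal.jar" is a placeholder for the real application.
java -Xms1024m -Xmx1024m \
     -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=500 \
     -jar portal.jar
```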
> - What settings would you suggest to reach my goal of having very few
> pauses above 1s?
>
> - Does SurvivorRatio have the same meaning for G1 as for the
> traditional collectors?
>
> - With G1, is it suitable to set the occupancy threshold to similar
> values as with CMS? (80 worked fine with CMS in the same test )
No. In CMS the threshold applied to the old generation only; in G1 it
applies to the whole heap. 80% is probably a bit too high.
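A back-of-the-envelope calculation, using the numbers from the original
post, suggests why 80% is risky here (this is only a rough sketch; G1's
actual trigger behavior involves more factors than this arithmetic):

```shell
# Numbers from the original post: 1 GB heap, a fixed 400 MB young
# generation, InitiatingHeapOccupancyPercent=80.
heap_mb=1024
ihop_pct=80
young_mb=400

threshold_mb=$(( heap_mb * ihop_pct / 100 ))   # occupancy at which marking starts
headroom_mb=$(( heap_mb - threshold_mb ))      # space left while marking runs

echo "marking starts at ${threshold_mb} MB"       # 819 MB
echo "headroom while marking: ${headroom_mb} MB"  # 205 MB
```

With only about 205 MB of headroom above the threshold, but a fixed 400 MB
young generation filling up underneath it, allocation can outrun the
concurrent cycle before it completes, which would match the observed
fall-back to Full GC.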
> - Why are Full GC pauses with the failed G1 so much longer than with
> ParallelGC?
I /think/ G1's full GCs are single-threaded.
> - I noticed in the logs that Full GC pauses take sometimes 50s of
> real time and only 12s of usr time. How come? I have never seen the other
> collectors idling on their time.
Swapping ?
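One way to check for swapping on Solaris (the OS in the original post) is
vmstat; a persistently nonzero page scan rate in the `sr` column is a sign
of memory pressure. A sketch (interval and sample count are arbitrary):

```shell
# Print memory statistics every 5 seconds, 10 samples. On Solaris, a
# persistently nonzero 'sr' (page scan rate) column suggests the machine
# is short on memory and JVM pages may be getting swapped out, which
# would explain real time far exceeding usr time during a collection.
vmstat 5 10
```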
> I will attach a GC log file with GCDetails to a third mail to avoid breaking
> the 100k limit on mails to this list.
Those would help.
I'm using:
-XX:+PrintGCDetails
-XX:+PrintAdaptiveSizePolicy
to print out interesting G1 logs.
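For completeness, a sketch of how those logging flags might be combined
with the earlier settings on one command line (the jar name and log file
name are placeholders; `-Xloggc` redirects the GC log to a file):

```shell
# G1 with the pause target from the good-case run, plus detailed GC and
# adaptive-sizing logging written to gc.log.
java -Xms1024m -Xmx1024m \
     -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=500 \
     -XX:+PrintGCDetails \
     -XX:+PrintAdaptiveSizePolicy \
     -Xloggc:gc.log \
     -jar portal.jar
```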
Simon
--
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.
----
Finally, no matter how good the architecture and design are,
to deliver bug-free software with optimal performance and reliability,
the implementation technique must be flawless. Victoria Livschitz
More information about the hotspot-gc-use mailing list