Using G1 with Apache Solr

Thomas Schatzl thomas.schatzl at oracle.com
Wed Mar 25 14:28:12 UTC 2015


Hi Kamran,

On Tue, 2015-03-24 at 17:48 -0400, Kamran Khawaja wrote:
> I'm running Solr 4.7.2 with Java 7u75 with the following JVM params:
>         -verbose:gc 
>         -XX:+PrintGCDateStamps 
>         -XX:+PrintGCDetails 
>         -XX:+PrintAdaptiveSizePolicy 
>         -XX:+PrintReferenceGC 
>         -Xmx3072m 
>         -Xms3072m 
>         -XX:+UseG1GC 
>         -XX:+UseLargePages 
>         -XX:+AggressiveOpts 
>         -XX:+ParallelRefProcEnabled 
>         -XX:G1HeapRegionSize=8m 
>         -XX:InitiatingHeapOccupancyPercent=35 

> What I'm currently seeing is that many of the GC pauses are under an
> acceptable 0.25 seconds, but I'm also seeing way too many full GCs with
> an average stop time of 3.2 seconds.
> 
> You can find the gc logs
> here: https://www.dropbox.com/s/v04b336v2k5l05e/g1_gc_7u75.log.gz?dl=0
> 
> I initially tested without specifying the HeapRegionSize but that
> resulted in the "humongous" message in the gc logs and a ton of full
> gc pauses.
> 
> Any pointers or areas to further investigate would be appreciated.

The problem seems to be a somewhat inconsistent survival rate in the
young gen. Most of the time only around 5% of the young gen survives,
while every now and then a third or more does.

Just before these full GCs the heap already seems to be fairly full, and
the existing mechanisms cannot handle this.

There are a few things you could try (a combined set of flags is
sketched below):
- disable PLAB resizing (-XX:-ResizePLAB), as this may decrease the
amount of space that is actually required for copying.
- increase the evacuation reserve (-XX:G1ReservePercent=15; the default
is 10), whose purpose is exactly to provide a safety buffer for such
cases.
- cap the maximum young generation size, so that even when a large part
of the young generation survives, that part is not that big in absolute
terms. E.g. -XX:G1MaxNewSizePercent=25 limits the young gen to 768M,
which seems okay to me (the default is 60); note that you also need to
pass -XX:+UnlockExperimentalVMOptions in front of it.
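
For reference, here is a sketch of how the additional flags could be
combined with your current options. The concrete values are only
starting points to experiment with, not tuned recommendations; the only
ordering constraint is that -XX:+UnlockExperimentalVMOptions has to
appear before -XX:G1MaxNewSizePercent:

        -XX:-ResizePLAB
        -XX:G1ReservePercent=15
        -XX:+UnlockExperimentalVMOptions
        -XX:G1MaxNewSizePercent=25

You do not need all three changes at once; applying them one at a time
makes it easier to see which one actually helps.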

Thanks,
  Thomas
