Why does G1GC do Full GC when it only needs 40 bytes?

Martin Makundi martin.makundi at koodaripalvelut.com
Tue Jun 10 01:03:48 UTC 2014


>
>
>   1. "-XX:InitiatingHeapOccupancyPercent=0 should trigger concurrent GCs
> too often."
>
>  We use InitiatingHeapOccupancyPercent to make sure the GC is always
> active. How is InitiatingHeapOccupancyPercent related to Full GCs?
>
> InitiatingHeapOccupancyPercent determines when concurrent GC cycles are
> triggered, so that we can do mixed GCs.  In your case, the heap usage after
> a Full GC is 15.5g, so you can try setting this to 60.
>
> But this is not the reason why you get full gcs.
>
> The mixed gc should be able to clean more.
>

Thanks. Do you mean InitiatingHeapOccupancyPercent=0 disables some
features? I thought it simply makes the concurrent cycle start earlier
(always on). I will try different values, but I would like to know what harm
=0 does.
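For reference, the change suggested above would look like this. This is a sketch based on the numbers in the reply: roughly 15.5 GB live after a Full GC in a 30 GB heap is about 52% occupancy, so 60 still leaves headroom above the live set.

```shell
# With InitiatingHeapOccupancyPercent=0, G1 starts concurrent marking
# cycles continuously. With 60, a concurrent cycle starts only once heap
# occupancy crosses 60%, which is above the ~52% live set seen after
# Full GC in this log.
-XX:+UseG1GC
-XX:InitiatingHeapOccupancyPercent=60
```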


> In the gc log, there are a lot of messages like:
> 1. "750.559: [G1Ergonomics (CSet Construction) finish adding old regions
> to CSet, reason: reclaimable percentage not over threshold, old: 38
> regions, max: 379 regions, reclaimable: 3172139640 bytes (9.99 %),
> threshold: 10.00 %]"
>
> the 10% threshold is controlled by -XX:G1HeapWastePercent=10.  You can try
> 5 so that more mixed GCs will happen after the concurrent cycles.
>

Thanks, will try that.
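For reference, the suggested change would look like this (a sketch; 5 is the value proposed in the reply above, and 10 is the default that produced the "reclaimable: 9.99%, threshold: 10.00%" cutoff in the log):

```shell
# G1 stops adding old regions to the collection set once the reclaimable
# space falls below G1HeapWastePercent of the heap. Lowering it from the
# default 10 to 5 lets mixed GCs keep collecting old regions for longer
# after each concurrent cycle.
-XX:G1HeapWastePercent=5
```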


>  2. It depends on whether the target pause time of 500 ms is critical.
> Currently, the GC pauses are > 500 ms.  If keeping pauses under 500 ms is
> critical, you need to decrease the Eden size.
> We would need to take another look at the GC logs then.
>
> If you can relax the target pause time, please increase it, so that we do
> not see too many
>

It's a web application, so response time is important for users. Will try 75
and GCPauseIntervalMillis=1000.
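For reference, that pair of settings would look like this (a sketch; 75 and 1000 are the values mentioned above, and GCPauseIntervalMillis must be larger than MaxGCPauseMillis for the pair to make sense):

```shell
# MaxGCPauseMillis is the soft target for each individual STW pause;
# GCPauseIntervalMillis is the soft target for the time window in which
# at most one such pause should occur.
-XX:MaxGCPauseMillis=75
-XX:GCPauseIntervalMillis=1000
```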


>  "5099.012: [G1Ergonomics (CSet Construction) finish adding old regions
> to CSet, reason: predicted time is too high, predicted time: 3.34 ms,
> remaining time: 0.00 ms, old: 100 regions, min: 100 regions]". Maybe also
> try -XX:G1MixedGCCountTarget=<4>.  This determines the minimum number of
> old regions added to the CSet.
>

Will try that too.
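For reference, the flag suggested above would look like this. As a sketch of the mechanism: G1MixedGCCountTarget is the target number of mixed GCs after each marking cycle, so the minimum old regions per collection is roughly the candidate old-region count divided by this target.

```shell
# A lower G1MixedGCCountTarget packs more old regions into each mixed GC
# (fewer, larger mixed collections); a higher value spreads the same old
# regions across more, shorter pauses. 4 is the value suggested in the
# reply above.
-XX:G1MixedGCCountTarget=4
```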


>
> This workload has a high reference processing time.  It is good that you
> have enabled parallel reference processing, and the parallel work seems to
> be distributed well among the 13 GC threads, though the work termination
> time is high.  If you are not CPU bound, you can try increasing the GC
> threads to 16 using -XX:ParallelGCThreads=16; does your system have 20
> CPU threads?
>

There are 16 CPUs, so I assume there are 16 hardware threads...
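For reference, the suggestion above would look like this. The 13 GC threads seen in the log are consistent with HotSpot's default sizing on a 16-CPU machine.

```shell
# By default HotSpot sets ParallelGCThreads to ncpus for up to 8 CPUs,
# then 8 + (ncpus - 8) * 5/8 beyond that: 8 + 8 * 5/8 = 13 on 16 CPUs,
# matching the 13 GC threads in the log. Raising it to 16 only helps if
# the machine is not already CPU bound.
-XX:ParallelGCThreads=16
-XX:+ParallelRefProcEnabled
```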

>
> There is a lot of system CPU activity, which might contribute to longer
> GC pauses.  I have seen people report that this is due to logging.  Not
> sure if this applies to your case.
>

Which activities are these? Do you mean that having GC logging enabled might
contribute to this?


>
>  2. "What is the jdk version?
>
>  java version "1.7.0_55"
> Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
>
>
>  3. Can you share the gc log?"
>
> Log is available at 81.22.250.165/log
>
>  4. I wonder if it is possible to have a mixed set of different or
> automatically varying G1HeapRegionSize values: small regions for smaller
> objects and larger regions for larger objects?
>
> Currently we do not have mixed region sizes.
>
>

Has this been tried in some GC?

Martin



>
>
> Martin
>
>
> 2014-06-07 11:03 GMT+03:00 Martin Makundi <
> martin.makundi at koodaripalvelut.com>:
>
>>  Why does G1GC do Full GC when it only needs 40 bytes?
>>
>>  Is there a way to tune this so that it would try to free up "some"
>> chunk of memory and avoid the Full GC once enough memory has been freed?
>> That would lower the freeze time.
>>
>>  See logs:
>>
>>    [Times: user=12.89 sys=0.16, real=1.25 secs]
>> 2014-06-06T17:11:48.727+0300: 34022.631: [GC
>> concurrent-root-region-scan-start]
>> 2014-06-06T17:11:48.727+0300: 34022.631: [GC
>> concurrent-root-region-scan-end, 0.0000180 secs]
>> 2014-06-06T17:11:48.727+0300: 34022.631: [GC concurrent-mark-start]
>>  34022.632: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
>> allocation request failed, allocation request: 40 bytes]
>>  34022.632: [G1Ergonomics (Heap Sizing) expand the heap, requested
>> expansion amount: 8388608 bytes, attempted expansion amount: 8388608 bytes]
>>  34022.632: [G1Ergonomics (Heap Sizing) did not expand the heap, reason:
>> heap expansion operation failed]
>> {Heap before GC invocations=1867 (full 1):
>>   garbage-first heap   total 31457280K, used 31453781K
>> [0x00007f1724000000, 0x00007f1ea4000000, 0x00007f1ea4000000)
>>   region size 8192K, 0 young (0K), 0 survivors (0K)
>>   compacting perm gen  total 524288K, used 166271K [0x00007f1ea4000000,
>> 0x00007f1ec4000000, 0x00007f1ec4000000)
>>    the space 524288K,  31% used [0x00007f1ea4000000, 0x00007f1eae25fdc8,
>> 0x00007f1eae25fe00, 0x00007f1ec4000000)
>> No shared spaces configured.
>> 2014-06-06T17:11:48.728+0300: 34022.632: [Full GC 29G->13G(30G),
>> 47.7918670 secs]
>>
>>
>>  Settings:
>>
>>  -server -XX:InitiatingHeapOccupancyPercent=0 -Xss4096k
>> -XX:MaxPermSize=512m -XX:PermSize=512m -Xms20G -Xmx30G -Xnoclassgc
>> -XX:-OmitStackTraceInFastThrow -XX:+UseNUMA -XX:+UseFastAccessorMethods
>> -XX:ReservedCodeCacheSize=128m -XX:-UseStringCache -XX:+UseGCOverheadLimit
>> -Duser.timezone=EET -XX:+UseCompressedOops -XX:+DisableExplicitGC
>> -XX:+AggressiveOpts -XX:CMSInitiatingOccupancyFraction=70
>> -XX:+ParallelRefProcEnabled -XX:+UseAdaptiveSizePolicy
>> -XX:MaxGCPauseMillis=500 -XX:+UseG1GC -XX:G1HeapRegionSize=8M
>> -XX:GCPauseIntervalMillis=10000 -XX:+PrintGCDetails -XX:+PrintHeapAtGC
>> -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDateStamps -XX:+PrintGC
>> -Xloggc:gc.log
>>
>
>
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>
>
>


More information about the hotspot-gc-use mailing list