CMS large objects vs G1 humongous allocations

Vitaly Davidovich vitalyd at gmail.com
Mon Jan 30 11:44:22 UTC 2017


I don't believe region sizes >32MB are planned, at least that wasn't the
case before - maybe that's changed though.

Some tuning options to consider:
1) Increase max heap size (this is always a blunt but frequently effective
instrument)
2) Turn InitiatingHeapOccupancyPercent down from its default value of 45, so
the concurrent cycle starts earlier.
3) Increase ConcGCThreads
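
To make that concrete, a command line applying all three suggestions might look like the following. The flag names are standard HotSpot options; the specific values and `app.jar` are illustrative placeholders, not recommendations for your workload:

```shell
# Illustrative only - tune values against your own GC logs.
java -Xmx20g \
     -XX:InitiatingHeapOccupancyPercent=30 \
     -XX:ConcGCThreads=8 \
     -jar app.jar
```

The idea is that a bigger heap plus an earlier-starting, better-resourced concurrent cycle gives marking more headroom to finish before allocation exhausts the heap.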

What version of the JRE are you running? From the short GC log snippet you
pasted, it looks like G1's concurrent cycles are losing the race to your Java
threads (i.e. you exhaust the heap before the concurrent phase can clean up).

How much does a full GC reclaim? You may want to provide more of your GC
log - someone is going to ask for that sooner or later :).
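
If you don't already have detailed logging enabled, flags like the following (JDK 8 style; the log path is a placeholder) capture the G1Ergonomics detail seen in your snippet plus full-GC reclaim numbers:

```shell
# JDK 8-era GC logging flags; on JDK 9+ use -Xlog:gc* instead.
java -XX:+PrintGCDetails \
     -XX:+PrintGCDateStamps \
     -XX:+PrintAdaptiveSizePolicy \
     -Xloggc:/var/log/app-gc.log \
     -jar app.jar
```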


On Mon, Jan 30, 2017 at 3:24 AM Amit Balode <amit.balode at gmail.com> wrote:

> Hello, so the only reason we decided to move to G1 from CMS was due to
> fragmentation issues with CMS. After moving to G1, we have started seeing
> humongous allocations resulting in Full GCs. We have some large objects
> which are allocated in quick succession, resulting in this issue, but the
> use case for continuous allocation on the application side seems genuine.
> The heap is getting full, so it's explainable why a full GC is happening,
> although this issue did not cause full GCs under CMS.
>
> a) Currently running with a max 32MB region size, humongous allocations as
> high as 21MB are happening. For such large allocations, 32MB seems small;
> maybe 64MB would have been appropriate. Is that an option that will be
> available in future G1 releases?
> b) Given that the application behaviour cannot be changed much to stop the
> continuous large allocations, what are some G1-specific settings to tune
> to make it more resilient?
>
> Below is a snippet from the application running with a 16GB heap, 40ms
> pause time target and 32MB region size.
> {code}
> 2017-01-29T14:53:14.770+0000: 189106.959: [GC pause (G1 Evacuation Pause)
> (young)
> Desired survivor size 654311424 bytes, new threshold 15 (max 15)
> - age   1:  240262896 bytes,  240262896 total
> - age   2:    3476760 bytes,  243739656 total
> - age   3:    3293240 bytes,  247032896 total
> - age   4:    3147072 bytes,  250179968 total
> - age   5:     420832 bytes,  250600800 total
> - age   6:     614688 bytes,  251215488 total
> - age   7:    1139960 bytes,  252355448 total
> - age   8:     632088 bytes,  252987536 total
> - age   9:     425488 bytes,  253413024 total
> - age  10:    1592608 bytes,  255005632 total
>  189106.960: [G1Ergonomics (CSet Construction) start choosing CSet,
> _pending_cards: 29363, predicted base time: 20.87 ms, remaining time: 19.13
> ms, target pause time: 40.00 ms]
>  189106.960: [G1Ergonomics (CSet Construction) add young regions to CSet,
> eden: 276 regions, survivors: 27 regions, predicted young region time:
> 12.11 ms]
>  189106.960: [G1Ergonomics (CSet Construction) finish choosing CSet, eden:
> 276 regions, survivors: 27 regions, old: 0 regions, predicted pause time:
> 32.98 ms, target pause time: 40.00 ms]
>  189106.961: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
> region allocation request failed, allocation request: 5549208 bytes]
>  189106.961: [G1Ergonomics (Heap Sizing) expand the heap, requested
> expansion amount: 5549208 bytes, attempted expansion amount: 33554432 bytes]
>  189106.961: [G1Ergonomics (Heap Sizing) did not expand the heap, reason:
> heap already fully expanded]
>  189114.730: [G1Ergonomics (Concurrent Cycles) do not request concurrent
> cycle initiation, reason: still doing mixed collections, occupancy:
> 13119782912 bytes, allocation request: 0 bytes, threshold: 7730941095
> bytes (45.00 %), source: end of GC]
> 189114.730: [G1Ergonomics (Mixed GCs) start mixed GCs, reason: candidate
> old regions available, candidate old regions: 45 regions, reclaimable:
> 1026676456 bytes (5.98 %), threshold: 5.00 %]
>  (to-space exhausted), *7.7714626 secs*]
>    [Parallel Time: 7182.7 ms, GC Workers: 18]
> {code}
>
> --
> Thanks & Regards,
> Amit
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>
-- 
Sent from my phone

