CMS large objects vs G1 humongous allocations

Amit Balode amit.balode at gmail.com
Mon Jan 30 08:24:09 UTC 2017


Hello, the only reason we decided to move from CMS to G1 was CMS's
fragmentation issues. After moving to G1, we have started seeing humongous
allocations that result in Full GCs. We have some large objects that are
allocated in quick succession, which triggers this issue, but the
application's use case for continuous large allocation seems genuine. The
heap is getting full, so the Full GCs themselves are explainable, although
we did not see Full GCs for this reason under CMS.
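For reference, G1 treats any single allocation larger than half a region as humongous, so with 32MB regions the cutoff is 16MB and the 21MB objects mentioned below qualify. A minimal sketch of that arithmetic (the class and method names are made up for illustration; the sizes are the ones from this post):

```java
public class HumongousThreshold {
    // G1 classifies an allocation as humongous when it exceeds
    // half of G1HeapRegionSize.
    static boolean isHumongous(long allocationBytes, long regionBytes) {
        return allocationBytes > regionBytes / 2;
    }

    public static void main(String[] args) {
        long region = 32L * 1024 * 1024; // -XX:G1HeapRegionSize=32m
        long alloc  = 21L * 1024 * 1024; // 21MB object from this post

        System.out.println(isHumongous(alloc, region));            // prints "true"
        // With a hypothetical 64MB region, the same object would
        // stay below the half-region threshold:
        System.out.println(isHumongous(alloc, 64L * 1024 * 1024)); // prints "false"
    }
}
```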

a) We are currently running with the maximum region size of 32MB, and
humongous allocations as large as 21MB are happening. For allocations that
large, 32MB regions seem too small; 64MB might have been more appropriate.
Is a larger region size an option in future G1 releases?
b) Given that the application's behaviour cannot be changed much to stop
the continuous large allocations, what are some G1-specific settings we can
tune to make it more resilient?

Below is a snippet from the application, running with a 16GB heap, a 40ms
pause-time target, and a 32MB region size.
{code}
2017-01-29T14:53:14.770+0000: 189106.959: [GC pause (G1 Evacuation Pause)
(young)
Desired survivor size 654311424 bytes, new threshold 15 (max 15)
- age   1:  240262896 bytes,  240262896 total
- age   2:    3476760 bytes,  243739656 total
- age   3:    3293240 bytes,  247032896 total
- age   4:    3147072 bytes,  250179968 total
- age   5:     420832 bytes,  250600800 total
- age   6:     614688 bytes,  251215488 total
- age   7:    1139960 bytes,  252355448 total
- age   8:     632088 bytes,  252987536 total
- age   9:     425488 bytes,  253413024 total
- age  10:    1592608 bytes,  255005632 total
 189106.960: [G1Ergonomics (CSet Construction) start choosing CSet,
_pending_cards: 29363, predicted base time: 20.87 ms, remaining time: 19.13
ms, target pause time: 40.00 ms]
 189106.960: [G1Ergonomics (CSet Construction) add young regions to CSet,
eden: 276 regions, survivors: 27 regions, predicted young region time:
12.11 ms]
 189106.960: [G1Ergonomics (CSet Construction) finish choosing CSet, eden:
276 regions, survivors: 27 regions, old: 0 regions, predicted pause time:
32.98 ms, target pause time:
40.00 ms]
 189106.961: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason:
region allocation request failed, allocation request: 5549208 bytes]
 189106.961: [G1Ergonomics (Heap Sizing) expand the heap, requested
expansion amount: 5549208 bytes, attempted expansion amount: 33554432 bytes]
 189106.961: [G1Ergonomics (Heap Sizing) did not expand the heap, reason:
heap already fully expanded]
 189114.730: [G1Ergonomics (Concurrent Cycles) do not request concurrent
cycle initiation, reason: still doing mixed collections, occupancy:
13119782912 bytes, allocation request
: 0 bytes, threshold: 7730941095 bytes (45.00 %), source: end of GC]
189114.730: [G1Ergonomics (Mixed GCs) start mixed GCs, reason: candidate
old regions available, candidate old regions: 45 regions, reclaimable:
1026676456 bytes (5.98 %), threshold: 5.00 %]
 (to-space exhausted), *7.7714626 secs*]
   [Parallel Time: 7182.7 ms, GC Workers: 18]
{code}
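For context, the command line behind that run would look roughly like the following. Only the heap size, pause target, and region size are stated above; the logging flags are inferred from the G1Ergonomics and tenuring-distribution lines in the snippet, and `app.jar` is a placeholder:

```shell
# Reconstruction of the implied JVM options -- not the actual command line.
java -Xms16g -Xmx16g \
     -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=40 \
     -XX:G1HeapRegionSize=32m \
     -XX:+PrintGCDetails \
     -XX:+PrintAdaptiveSizePolicy \
     -XX:+PrintTenuringDistribution \
     -jar app.jar
```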

-- 
Thanks & Regards,
Amit

More information about the hotspot-gc-use mailing list