G1: no concurrent cycle initiation for humongous allocation

Simone Bordet simone.bordet at gmail.com
Mon Oct 14 13:18:42 PDT 2013


Hi,

On Mon, Oct 14, 2013 at 12:07 PM, Thomas Schatzl
<thomas.schatzl at oracle.com> wrote:
> If a humongous/large object allocation makes the occupied heap (after
> allocation of the humongous object) larger than the threshold, G1 tries
> to schedule a concurrent cycle.
>
> At the same time G1 is still in the mixed-GC phase (cleaning up the
> regular heap), and the collection policy does not allow initiation of
> a concurrent cycle during that phase.
>
> The message in essence just indicates that.
>
> Is there any problem, apart from the messages?

See the log excerpt below.

What I think happens is that the end of the concurrent cycle (the "GC
cleanup" phase) reclaims a large number of regions because they are all
garbage (objects promoted early out of young, since young is small).
The GC cleanup phase takes the heap from 509 MiB to 268 MiB.

Then some time passes (~50 s, from 989 to 1041); during this time young
has not filled up (even though it is only 140 MiB), yet at 1041.161 G1
reports an occupancy of 643825664 bytes (~644 MB).

After 1041.161, another 136 lines of the same type appear (and nothing
else), all of them within 32 s, all of them requesting the same
allocation size.
The last of these lines appears at 1073.224, showing that during those
32 s (1073 - 1041) about 287 MB were allocated (931 MB - 644 MB),
probably all of it as humongous objects placed directly in the old
generation (since no young GC was triggered despite young's small size).
If the same happened during the prior ~50 s between the GC cleanup and
1041.161, then young would still be "empty", but it would explain why a
concurrent cycle was requested: the heap really is over the IHOP
threshold because of the humongous allocations.
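A back-of-the-envelope check of the figures above (all numbers copied
from the two "do not request" log lines quoted at the end; this is
plain arithmetic, nothing G1-specific):

```java
// Sanity-check the allocation-rate estimate from the log lines
// at 1041.161 and 1073.224 (figures copied from the excerpt).
public class HumongousRate {
    static final long OCC_START = 643_825_664L; // occupancy at 1041.161
    static final long OCC_END   = 931_135_488L; // occupancy at 1073.224
    static final double T_START = 1041.161;
    static final double T_END   = 1073.224;
    static final long ALLOC     = 1_438_336L;   // size of every request

    static long deltaBytes() {
        return OCC_END - OCC_START;
    }

    static double mbPerSecond() {
        return deltaBytes() / 1e6 / (T_END - T_START);
    }

    public static void main(String[] args) {
        System.out.printf("delta = %d bytes (~%.0f MB)%n",
                deltaBytes(), deltaBytes() / 1e6);
        System.out.printf("rate  = ~%.1f MB/s%n", mbPerSecond());
        System.out.printf("~%.0f allocations of %d bytes%n",
                (double) deltaBytes() / ALLOC, ALLOC);
    }
}
```

So roughly 287 MB in ~32 s, i.e. about 9 MB/s, or on the order of 200
allocations of that one size.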

At first I could not believe that only humongous allocations were
happening, but that seems to be the case.

I guess this case is not handled well by the default region size, but
it is a good case for setting the region size explicitly: the humongous
allocations all seem to be 1438336 bytes (~1.4 MiB), so a 4 MiB region,
e.g. via -XX:G1HeapRegionSize=4m (G1 region sizes must be powers of
two, so 3 MiB is not possible), would keep them under the half-region
humongous threshold and "solve" this problem.
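A minimal sketch of that half-region rule as I understand it (an
allocation of at least half a region is treated as humongous and goes
directly to the old generation; the 1 MiB default region size is
inferred from the log's 140-region / 140 MB eden, not printed anywhere
in the log itself):

```java
// Sketch of G1's humongous classification: an allocation of at
// least half a region bypasses young and is placed directly in the
// old generation as a humongous object. The half-region rule and
// the region sizes are my reading of G1's behavior, assumed here.
public class HumongousCheck {
    static boolean isHumongous(long allocBytes, long regionBytes) {
        return allocBytes >= regionBytes / 2;
    }

    public static void main(String[] args) {
        long alloc = 1_438_336L;       // size seen in every log line
        long defaultRegion = 1L << 20; // 1 MiB, inferred from the log
        long biggerRegion  = 4L << 20; // 4 MiB, -XX:G1HeapRegionSize=4m

        System.out.println(isHumongous(alloc, defaultRegion)); // true: >= 512 KiB
        System.out.println(isHumongous(alloc, biggerRegion));  // false: < 2 MiB
    }
}
```

With the default 1 MiB region every one of these ~1.4 MiB allocations
is humongous; with 4 MiB regions the same allocations would be regular
young allocations.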

If my interpretation is wrong, I'd be happy to hear it.

Thanks!

-- 
Simone Bordet
http://bordet.blogspot.com
---
Finally, no matter how good the architecture and design are,
to deliver bug-free software with optimal performance and reliability,
the implementation technique must be flawless.   Victoria Livschitz


 989.437: [G1Ergonomics (Concurrent Cycles) request concurrent cycle
initiation, reason: occupancy higher than threshold, occupancy:
644874240 bytes, allocation request: 1438336 bytes, threshold:
644245080 bytes (60.00 %), source: concurrent humongous allocation]
 989.438: [G1Ergonomics (Concurrent Cycles) request concurrent cycle
initiation, reason: requested by GC cause, GC cause: G1 Humongous
Allocation]
 989.438: [G1Ergonomics (Concurrent Cycles) initiate concurrent cycle,
reason: concurrent cycle initiation requested]
2013-09-30T18:07:10.376+0200: [GC pause (young) (initial-mark)
989.439: [G1Ergonomics (CSet Construction) start choosing CSet,
_pending_cards: 6593, predicted base time: 68.56 ms, remaining time:
31.44 ms, target pause time: 100.00 ms]
 989.439: [G1Ergonomics (CSet Construction) add young regions to CSet,
eden: 140 regions, survivors: 4 regions, predicted young region time:
22.72 ms]
 989.439: [G1Ergonomics (CSet Construction) finish choosing CSet,
eden: 140 regions, survivors: 4 regions, old: 0 regions, predicted
pause time: 91.28 ms, target pause time: 100.00 ms]
, 0.0879000 secs]
   [Parallel Time: 86.0 ms, GC Workers: 2]
      [GC Worker Start (ms): Min: 989439.0, Avg: 989439.7, Max:
989440.3, Diff: 1.3]
      [Ext Root Scanning (ms): Min: 55.7, Avg: 58.9, Max: 62.2, Diff:
6.5, Sum: 117.8]
      [Update RS (ms): Min: 0.0, Avg: 4.5, Max: 8.9, Diff: 8.9, Sum: 8.9]
         [Processed Buffers: Min: 0, Avg: 31.0, Max: 62, Diff: 62, Sum: 62]
      [Scan RS (ms): Min: 0.0, Avg: 1.3, Max: 2.6, Diff: 2.5, Sum: 2.6]
      [Object Copy (ms): Min: 18.7, Avg: 20.6, Max: 22.4, Diff: 3.7, Sum: 41.1]
      [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
      [GC Worker Total (ms): Min: 84.6, Avg: 85.3, Max: 85.9, Diff:
1.3, Sum: 170.6]
      [GC Worker End (ms): Min: 989524.9, Avg: 989524.9, Max:
989524.9, Diff: 0.0]
   [Code Root Fixup: 0.0 ms]
   [Clear CT: 0.2 ms]
   [Other: 1.8 ms]
      [Choose CSet: 0.3 ms]
      [Ref Proc: 0.7 ms]
      [Ref Enq: 0.1 ms]
      [Free CSet: 0.2 ms]
   [Eden: 140.0M(274.0M)->0.0B(194.0M) Survivors: 4096.0K->7168.0K
Heap: 638.0M(1024.0M)->501.4M(1024.0M)]
 [Times: user=0.12 sys=0.00, real=0.09 secs]
2013-09-30T18:07:10.464+0200: [GC concurrent-root-region-scan-start]
2013-09-30T18:07:10.486+0200: [GC concurrent-root-region-scan-end,
0.0217720 secs]
2013-09-30T18:07:10.486+0200: [GC concurrent-mark-start]
2013-09-30T18:07:11.002+0200: [GC concurrent-mark-end, 0.5160090 secs]
2013-09-30T18:07:11.002+0200: [GC remark 2013-09-30T18:07:11.003+0200:
[GC ref-proc, 0.0038790 secs], 0.0398310 secs]
 [Times: user=0.04 sys=0.00, real=0.04 secs]
2013-09-30T18:07:11.044+0200: [GC cleanup 509M->268M(1024M), 0.0141640 secs]
 [Times: user=0.02 sys=0.00, real=0.02 secs]
2013-09-30T18:07:11.058+0200: [GC concurrent-cleanup-start]
2013-09-30T18:07:11.059+0200: [GC concurrent-cleanup-end, 0.0007930 secs]
 1041.161: [G1Ergonomics (Concurrent Cycles) do not request concurrent
cycle initiation, reason: still doing mixed collections, occupancy:
643825664 bytes, allocation request: 1438336 bytes, threshold:
644245080 bytes (60.00 %), source: concurrent humongous allocation]
... Other 135 lines of the same format
 1073.224: [G1Ergonomics (Concurrent Cycles) do not request concurrent
cycle initiation, reason: still doing mixed collections, occupancy:
931135488 bytes, allocation request: 1438336 bytes, threshold:
644245080 bytes (60.00 %), source: concurrent humongous allocation]

