Any plans to increase G1 max region size to 64m?

Thomas Schatzl thomas.schatzl at oracle.com
Wed Feb 11 12:43:07 UTC 2015


Hi,

On Mon, 2015-02-09 at 12:41 +0100, Thomas Viessmann wrote:
> many thanks for your detailed explanation. Here are some more details
> about the background. It seems that a humongous allocation triggers a
> (long) evacuation pause although the eden is still (almost) empty.
> Such long pauses could only be observed during a humongous allocation.

In the case shown by the gc log snippet, the explanation is that G1
tries to start the concurrent cycle as soon as the occupancy in the old
gen reaches the initiating heap occupancy threshold. It checks the
threshold every time a humongous allocation is done, because humongous
objects are allocated directly into the old generation.

I.e. this works as intended.
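
A simplified sketch of that check (the identifiers, the exact condition
and the threshold value in main() are my own illustration, not the actual
HotSpot code):

    // Illustrative sketch only; names are invented and this is not HotSpot code.
    final class ConcurrentCycleTrigger {
        long oldGenOccupancyBytes;   // current old generation occupancy
        long ihopThresholdBytes;     // derived from InitiatingHeapOccupancyPercent

        // Evaluated on every humongous allocation: if the occupancy plus the new
        // request crosses the threshold, request concurrent cycle initiation.
        boolean shouldRequestConcurrentCycle(long humongousRequestBytes) {
            return oldGenOccupancyBytes + humongousRequestBytes >= ihopThresholdBytes;
        }

        public static void main(String[] args) {
            ConcurrentCycleTrigger t = new ConcurrentCycleTrigger();
            t.oldGenOccupancyBytes = 11710496768L;  // occupancy from the log snippet
            t.ihopThresholdBytes = 11710496768L;    // hypothetical threshold value
            System.out.println(t.shouldRequestConcurrentCycle(33554448L));
        }
    }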

The cause of the long pauses seems to be a huge number of reference
changes shortly before the humongous object is allocated (visible in the
log as the large number of _pending_cards and the long Update RS phase).

I do not know why the number of pending cards is so high particularly
before humongous object allocation. There are two possibilities:
- The application itself does a large number of reference changes just
before allocating that new large object (a hypothetical pattern is
sketched below); in the normal case (i.e. when a young gc is triggered by
eden filling up) this number may always have been reduced to a manageable
amount by concurrent background activity.
- If that initial mark young gc happens right after a (final) mixed gc,
this would also explain the high number of pending cards.

The single GC log entry does not give enough information to rule out
either of these options.
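
A purely hypothetical example of the first possibility, i.e. an
application pattern that would dirty a lot of cards (and hence create
pending cards) right before a humongous allocation; this is an assumption
about what such code could look like, not something the log proves:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical pattern: a burst of reference-field writes into long-lived
    // objects (each write dirties a card), directly followed by allocating a
    // humongous array.
    public class PendingCardsPattern {
        static class Node { Object payload; }

        public static void main(String[] args) {
            List<Node> nodes = new ArrayList<>();
            for (int i = 0; i < 5_000_000; i++) {
                nodes.add(new Node());     // assume these get promoted to old gen
            }
            for (Node n : nodes) {
                n.payload = new Object();  // reference change -> dirty card
            }
            byte[] largeBuffer = new byte[32 * 1024 * 1024];  // humongous allocation
            System.out.println(nodes.size() + " updates, buffer " + largeBuffer.length);
        }
    }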

I created JDK-8072920 to record this problem.

> Here is an example from the gc.log
> 
>  173913.719: [G1Ergonomics (Concurrent Cycles) request concurrent
> cycle initiation, reason: occupancy higher than threshold, occupancy:
> 11710496768 bytes, allocation request: 33554448 bytes, threshold:

Side note: This large object is exactly 32M + 16 bytes large (i.e. 32M of
payload plus the object header). With 32M regions it therefore spans two
regions and actually requires 64M on the heap. Maybe the application could
be improved in that respect.
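
A small arithmetic sketch of that observation (splitting the 33554448 byte
allocation request into a 32M payload and a 16 byte header is my reading
of the log; the exact header size depends on the JVM configuration):

    // Reproduce the allocation-request arithmetic from the log snippet.
    public class HumongousSizeCheck {
        public static void main(String[] args) {
            long payload = 32L * 1024 * 1024;    // 32M of data
            long header = 16;                    // assumed object/array header size
            long request = payload + header;     // 33554448 bytes, as in the log
            long regionSize = 32L * 1024 * 1024; // current maximum G1 region size
            // A humongous object occupies a contiguous sequence of whole regions.
            long regions = (request + regionSize - 1) / regionSize;
            System.out.println(request + " bytes -> " + regions + " regions of 32M = "
                    + ((regions * regionSize) >> 20) + "M on the heap");
        }
    }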

As for fixes/workarounds for this problem, assuming that I am correct
about the situation:

I do not think increasing the region size to 64M helps, because the same
situation can occur for 64M humongous objects too. I am of course not
sure whether the application ever tries to allocate 64M objects. Also,
even with a 64M region size, your 32M + 16 byte object is still
considered humongous, since an object is humongous as soon as it is
larger than half a region (i.e. the application would require 128M
regions for this object to become a regular allocation).
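
A minimal sketch of that classification, assuming the usual G1 rule that
an object is humongous when it is larger than half a region:

    // Check whether the 32M + 16 byte object stays humongous as the region
    // size grows, using the "larger than half a region" rule.
    public class HumongousThreshold {
        public static void main(String[] args) {
            long objectSize = 32L * 1024 * 1024 + 16;
            long[] regionSizes = {32L << 20, 64L << 20, 128L << 20};
            for (long region : regionSizes) {
                boolean humongous = objectSize > region / 2;
                System.out.println((region >> 20) + "M regions: humongous = " + humongous);
            }
        }
    }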

In case these pending cards originate from a preceding mixed young gc,
then a solution would be to increase the IHOP
(-XX:InitiatingHeapOccupancyPercent), i.e. give the background processing
more time between the last mixed gc and that forced initial mark to
process them down to a useful level. I do not know whether increasing the
IHOP is acceptable in this situation.
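
For example (the value of 60 is only an illustration; whether any higher
value is acceptable depends on the application's live data set and
allocation rate):

    # raise the marking threshold from its default of 45%, so the occupancy
    # check at humongous allocations triggers the initial mark later
    -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=60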

If it is the application that causes so many pending cards just before
humongous object allocation, I do not see a good way to prevent this
right now. Maybe somebody else has an idea.

From the log snippet I also saw that the application is running with a
JDK7 build. 8u40's early reclaim of humongous objects may allow the
application to have more free space in general and to do fewer marking
cycles (but the problem may still occur in the current configuration).

Thanks,
  Thomas




