G1gc compaction algorithm
Thomas Schatzl
thomas.schatzl at oracle.com
Mon Aug 25 09:06:17 UTC 2014
Hi all,
On Fri, 2014-08-22 at 05:12 +0300, Martin Makundi wrote:
> > I suspect the application does not do the humongous
> > allocations; I suspect it's the GC itself that does these.
> > We have an allocation recorder that never sees these
> > humongous allocations within the application
> > itself...assuming they are in a form that the allocation
> > hook can detect.
>
> It is unlikely that G1 is doing this. Is it possible that the
> run you recorded did not include those allocation requests?
The collector is never a source for large object allocations in the Java
heap (unless you enable some testing/debug options).
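For reference, G1 treats any single allocation of at least half a region
size as humongous and places it directly into contiguous regions in the
old generation; large primitive arrays (for example the backing arrays of
big caches) are the usual source from application code. A minimal sketch,
assuming a 16 MB region size (consistent with the 7-regions-for-109816768-
bytes figure later in this thread); the class name and sizes are only for
illustration:

    // Illustration only: with 16 MB G1 regions, any object of 8 MB or
    // more is a humongous allocation, even a perfectly ordinary array.
    public class HumongousAllocExample {
        public static void main(String[] args) {
            // ~9 MB byte array: bigger than half a region, so G1 allocates
            // it directly into contiguous humongous regions.
            byte[] big = new byte[9 * 1024 * 1024];
            System.out.println("allocated " + big.length + " bytes");
        }
    }

Anything below that half-region threshold is allocated in the regular
young generation regions as usual.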
> > From within the application we have lots of objects that are
> > kept in ehcache, so ehcache manages the usage. I am not
> > familiar with ehcache internals but I don't think it uses
> > humongous objects in any way.
We have never had a report of the GC misreporting large object
allocations; the likelihood of that being the problem is very low.
> Ok, will try this: -Xms5G -Xmx30G. Is there some option to make the JVM
> shrink/release memory more aggressively? Will try the
> options -XX:MaxHeapFreeRatio=35 -XX:MinHeapFreeRatio=10, though I am
> not sure whether they take effect only after a full GC.
>
> My first impression is that G1ReservePercent should help if we
> increase it from the default (10). But I am not sure, due to the
> following: the 1st Full GC happened due to a humongous
> allocation; G1 could not find 7 consecutive regions to satisfy
> that allocation. If G1 leaves the G1ReservePercent reserve
> untouched, then it should be able to find 7 regions. In other
> words, G1ReservePercent=10 should be enough, unless the reserve
> is not kept as one contiguous chunk.
>
> What does G1ReservePercent affect? Does it reduce fragmentation, i.e.,
> whenever new allocations are made they are attempted below the
> G1ReservePercent threshold, or is it a hard limit on the available
> memory? I.e., how is it different from simply reducing Xmx?
The allocation reserve is memory kept back to be used by evacuation
so that no to-space exhaustion can occur.
I.e. the GC is started earlier than strictly required so that it does
not run out of space for the objects it evacuates.
This reserve is not necessarily a contiguous amount of space, so its
impact might be minimal here.
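For example, with the 30G heap discussed here and the default
G1ReservePercent=10, roughly 3G is kept free as that reserve (about 6G
with G1ReservePercent=20). Since the reserve is an amount of free space
rather than a fixed set of contiguous regions, it does not by itself
guarantee room for a ~105 MB humongous allocation.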
> > My main concern is that IF a full GC can clean up the memory,
> > there should be a mechanism that does just the same as a full
> > GC but without blocking for a long time... a concurrent full GC
> > that spreads the 30-60 second operation over, say, 10% overhead
> > until the whole full GC is done (that would take 30-60 s / 10% =
> > 300-600 seconds).
> The reason for the 1st Full GC in the 08-15 log:
> "2014-08-15T10:25:10.637+0300: 112485.906: [Full GC
> 20G->15G(31G), 58.5538840 secs]
> [Eden: 0.0B(1984.0M)->0.0B(320.0M) Survivors: 192.0M->0.0B
> Heap: 20.9G(31.5G)->15.9G(31.5G)]"
> G1 tried to satisfy the humongous allocation request, but could
> not find enough contiguous empty regions. Note that the heap
> usage is only 20.9G, but there are no consecutive free regions
> to hold 109816768 bytes.
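For the arithmetic: 109816768 bytes / 16777216 bytes per region = 6.55,
so 7 contiguous regions are needed (the 7-region figure also implies a
16 MB region size in this setup). With 20.9G of 31.5G used there is
roughly 10G free overall, but if no 7 of those free regions are adjacent,
the humongous allocation still fails and triggers the Full GC.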
>
> The rest of the Full GCs happened due to 'to-space exhausted'.
> It could simply be that the heap usage is that high. Note that
> after the 2nd full GC the heap usage is 27g, and the young GCs
> before that could not reclaim anything at all.
>
> Another reason a full GC can clean up more is that classes are
> not unloaded until a full GC. This is fixed in later jdk8 and
> jdk9 versions.
>
>
> Class unloading is disabled in our setup, so this should not be a
> factor. I still think an incremental full GC should be happening
> concurrently all the time instead of in an intermittent long pause
> (http://www.azulsystems.com/technology/c4-garbage-collector).
The intermittent long pauses are due to G1 not handling the workload you
have well. Please try the recently made available 8u40-b02 EA build
(https://jdk8.java.net/download.html), which contains a few fixes that
will help your application. Some of the tuning suggested so far will
have bad consequences, so it is probably best to more or less start
from scratch with the G1 options.
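As a baseline, something like plain -Xms/-Xmx settings with -XX:+UseG1GC,
plus at most a -XX:MaxGCPauseMillis target, is usually a reasonable
starting point; add other options back one at a time only when the logs
show a concrete problem.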
Thanks,
Thomas