CMS large objects vs G1 humongous allocations
Thomas Schatzl
thomas.schatzl at oracle.com
Tue Jan 31 10:51:06 UTC 2017
Hi,
just commenting a bit on Vitaly's advice:
On Mon, 2017-01-30 at 13:49 +0000, Vitaly Davidovich wrote:
> Also, you can experiment with relaxing the pause time goal a bit. G1
> will use it as a heuristic to determine how many old regions (during
> mixed GC) to add to a collection. If you add too few, you're not
> reclaiming fast enough (potentially) for your allocation rate.
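(For reference, the pause time goal Vitaly mentions is the
-XX:MaxGCPauseMillis option; G1's default is 200ms. A purely
illustrative way to relax it, not a recommendation for your workload:

  java -XX:+UseG1GC -XX:MaxGCPauseMillis=500 -jar app.jar   # app.jar is a placeholder

A larger value gives G1 room to put more old regions into each mixed
collection.)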
>
> Is your entire gc log too big? If not, might be good to attach it or
> put it somewhere (e.g. pastebin) so we can see the bigger picture.
Agree. It is definitely helpful to see more than just a single GC.
Often the context determines the appropriate response.
>
> On Mon, Jan 30, 2017 at 8:38 AM Vitaly Davidovich <vitalyd at gmail.com> wrote:
> > On Mon, Jan 30, 2017 at 8:19 AM, Amit Balode <amit.balode at gmail.com> wrote:
> > > Vitaly, thank you.
> > >
> > > There is no explicit line which says Full GC. I thought the line
> > > printed as "(Heap Sizing) did not expand the heap, reason: heap
> > > already fully expanded" was an implicit indication that G1 has
> > > taken all the necessary actions a full GC would take, and that
> > > this led to the 7.7sec pause.
> > >
> > Oh really, there's no Full GC that ensues after those lines?
:) Sometimes just trying to continue, as opposed to starting a full GC
right away, pays off.
> >
> > The (Heap Sizing) output is G1's ergonomic output. It's saying
> > that it would like to expand the heap because it failed to allocate
> > a new region out of the existing heap. The reason for expansion is
> > G1 may start out with a heap smaller than capacity (i.e. max heap
> > size), and will try to expand the heap (if under capacity/max) as
> > needed. Here, it cannot expand because you're already at max heap
> > size.
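Just to spell the sizing flags out: the initial and maximum heap sizes
are set with -Xms and -Xmx. As a purely illustrative sketch (the size
and jar name are placeholders), making them equal means G1 never has to
expand the heap at runtime in the first place:

  java -XX:+UseG1GC -Xms8g -Xmx8g -jar app.jar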
> >
> > The 7.7s pause is due to to-space exhaustion, which basically means
> > G1 ran out of space to copy survivors (and you can see that this 7s
> > is all in the Object Copy phase of the pause). In my experience,
> > when you start seeing to-space exhaustion, you pretty soon see a
> > Full GC.
With JDK 9, to-space exhaustion handling got significantly faster. It
should be very close to a regular GC pause in most cases.
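(On JDK 8 you can spot these events in the GC log; the flags below are
just the usual logging options, with the log file name a placeholder:

  java -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
       -XX:+PrintAdaptiveSizePolicy -Xloggc:gc.log -jar app.jar

If I remember the JDK 8 output correctly, evacuation failures are
flagged with "(to-space exhausted)" on the pause line, and
-XX:+PrintAdaptiveSizePolicy is what produces the "(Heap Sizing)"
ergonomics messages quoted above.)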
In JDK 8, the only way to make this problem disappear is, as Vitaly
mentioned, to avoid to-space exhaustion in the first place, either by
- increasing the heap (to give marking more time to finish),
- decreasing the initiating heap occupancy (to start marking earlier)
via -XX:InitiatingHeapOccupancyPercent, or
- increasing the speed of marking by raising the number of marking
threads via -XX:ConcGCThreads.
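Put together, a tuning experiment along those lines could look like
this (the concrete numbers are only placeholders to show the flags, not
recommendations):

  # larger heap, earlier marking start (default IHOP is 45), more marking threads
  java -XX:+UseG1GC -Xmx12g \
       -XX:InitiatingHeapOccupancyPercent=35 \
       -XX:ConcGCThreads=4 \
       -jar app.jar   # app.jar is a placeholder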
> > > Sorry, a bit confused about how 'G1's concurrent cycles' and
> > > 'parallel phases of young collections' can run concurrently. Does
> > > that mean ConcGCThreads + ParallelGCThreads cannot be greater
> > > than the number of CPU cores?
> > >
> > G1 has concurrent GC cycles, such as concurrent marking - this runs
> > in the background, on the ConcGCThreads, and marks old regions to
> > gather their liveness information (so that these regions can either
> > be cleaned up in the background, if fully empty, or they'll be
> > added to young GCs to make them mixed GCs). At the same time,
> > while that concurrent work is ongoing, your Java threads continue
> > to run and allocate. You can then hit a young pause, where
> > ParallelGCThreads will perform some of the parallel work associated
> > with a young GC. Young GCs look at eden regions (and do some
> > processing of them in parallel), while concurrent GC threads can be
> > processing old regions at the same time. So yes, you can
> > theoretically hit a case where ConcGCThreads + ParallelGCThreads
> > threads are all runnable, and you may oversubscribe the machine
> > (assuming those threads don't stall themselves due to internal
> > issues, such as lock contention, etc.).
The marking threads will always suspend during GC pauses, so this
situation can't occur.
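To make the thread accounting concrete: with explicit settings like the
ones below (numbers purely illustrative, app.jar a placeholder), the
parallel workers and the marking threads never run at the same time, so
of those two groups at most max(ParallelGCThreads, ConcGCThreads)
threads are active at any given moment:

  # e.g. on a 16-core machine
  java -XX:+UseG1GC -XX:ParallelGCThreads=16 -XX:ConcGCThreads=4 -jar app.jar
  # during a pause:  16 parallel GC worker threads, marking threads suspended
  # outside a pause:  4 marking threads running alongside the application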
Thanks,
Thomas