CMS large objects vs G1 humongous allocations
Amit Balode
amit.balode at gmail.com
Fri Feb 3 16:46:26 UTC 2017
Thomas, thanks a lot for the inputs. I will try out those options you and
Vitaly mentioned.
On Thu, Feb 2, 2017 at 4:37 PM, Thomas Schatzl <thomas.schatzl at oracle.com>
wrote:
> Hi,
>
> On Wed, 2017-02-01 at 19:18 +0530, Amit Balode wrote:
> > Hi Thomas, thanks for the input.
> >
> > For "Every time this happens, the young gen is really large, however
> > it seems that according to heap size calculations the
> > surviving objects should actually have enough space." - Could you
> > paste the snippet from the log which you are referring to?
>
> [Eden: 8960.0M(8960.0M)->0.0B(288.0M) Survivors: 864.0M->512.0M
> Heap: 13.6G(16.0G)->2112.0M(16.0G)]
>
> [Eden: 8832.0M(8960.0M)->0.0B(800.0M) Survivors: 864.0M->0.0B Heap:
> 13.9G(16.0G)->11.6G(16.0G)]
>
> [Eden: 8960.0M(8960.0M)->0.0B(8544.0M) Survivors: 320.0M->512.0M
> Heap: 13.3G(16.0G)->2624.0M(16.0G)]
>
> [Eden: 8416.0M(9600.0M)->0.0B(9440.0M) Survivors: 224.0M->384.0M
> Heap: 13.1G(16.0G)->2392.0M(16.0G)]
>
> These are from the GCs that had an evacuation failure.
>
> According to these lines, the heap occupancy at those GCs is e.g.
> 13.1G, i.e. quite a bit lower than the 16G capacity, which should in
> theory be enough to cover the promotion (looking at previous GCs, it
> is at most a few hundred MB).
>
> (Caveat: there are a lot of assumptions in application behavior here)
>
> So the 13.1G occupancy (which means 2.9G free) may be somewhat
> misleading: it shows free memory, but not memory that can actually be
> allocated into. My guess is that this is due to humongous objects.
>
> So we are probably closer to a full heap than we think we are.
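To make the point above concrete: in G1, an object at or above half a region size is allocated as "humongous" and occupies one or more whole regions, so part of the "free" heap can be unusable for ordinary allocation. A minimal illustrative sketch (the region and object sizes below are assumed examples, not values taken from these logs):

```java
// Sketch of how G1 classifies humongous allocations (illustrative,
// not the actual HotSpot source).
public class HumongousSketch {
    // In G1, an object is "humongous" if it is at least half a region.
    static boolean isHumongous(long objectBytes, long regionBytes) {
        return objectBytes >= regionBytes / 2;
    }

    // Humongous objects take whole regions; the tail of the last
    // region cannot be used for other allocations.
    static long regionsConsumed(long objectBytes, long regionBytes) {
        return (objectBytes + regionBytes - 1) / regionBytes;
    }

    public static void main(String[] args) {
        long region = 8L * 1024 * 1024;  // assume -XX:G1HeapRegionSize=8m
        long obj = 5L * 1024 * 1024;     // a 5 MB array
        System.out.println(isHumongous(obj, region));     // true: >= 4 MB
        System.out.println(regionsConsumed(obj, region)); // 1 whole region
    }
}
```

With an 8 MB region, the 5 MB object above leaves roughly 3 MB at the end of its region that counts as free but cannot be allocated into.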
>
> > "I remember discussing this or similar issues in the past, not sure
> > if it has been fixed in one way or another in the meantime." It would
> > really be great if you could help dig whether it has been fixed and
> > which release so we could try upgrading to it.
>
> One of the issues I remember is that garbage collection itself wasted
> quite a bit of heap through PLAB sizing: GC threads do not allocate
> object by object, but, for various reasons, grab memory to copy into
> in largish chunks (the PLABs). The existing young-gen calculation
> mostly assumes that there is no memory overhead because of this
> (though there are some "heuristics" in there, of course).
>
> In memory-tight situations this can cause exactly that problem.
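As a rough illustration of the PLAB overhead described above, the worst-case waste scales with the number of GC threads and the PLAB size. The thread counts and sizes below are assumptions for the sketch, not measurements from this thread, and actual PLAB sizing in HotSpot is adaptive and more complex:

```java
// Back-of-envelope sketch of worst-case PLAB waste (illustrative
// numbers; not how HotSpot computes it internally).
public class PlabWasteSketch {
    // Each GC thread may retire a partially filled PLAB per space it
    // copies into, so the unused tails add up across threads.
    static long worstCaseWasteBytes(int gcThreads, long plabBytes, int spaces) {
        return (long) gcThreads * plabBytes * spaces;
    }

    public static void main(String[] args) {
        // Assumed values: 16 GC threads, 512 KB PLABs, survivor + old space.
        long waste = worstCaseWasteBytes(16, 512 * 1024, 2);
        System.out.println(waste / (1024 * 1024) + " MB"); // prints "16 MB"
    }
}
```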
>
> This sometimes excessive Java heap consumption during GC has been
> improved a lot in JDK 9; furthermore, evacuation failures are very
> fast with it.
>
> One other option for any older release is the aforementioned
> G1MaxNewSizePercent, which basically limits the amount of data copied
> during GC (so that the other heuristics stay accurate). Other options
> are fixing the PLAB size (potentially impacting GC performance) or
> increasing G1ReservePercent (the "heuristics" mentioned above).
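For reference, the two G1 options named above could be combined on a command line like this. The heap size matches the 16G heap in the logs, but the percentage values are illustrative starting points to experiment with, not recommendations from this thread; note that G1MaxNewSizePercent is an experimental flag and must be unlocked:

```shell
# Illustrative flags for the tuning knobs discussed above.
# -Xms/-Xmx match the 16G heap in the logs; the percentages are
# example values to experiment with, not recommendations.
java -Xms16g -Xmx16g -XX:+UseG1GC \
     -XX:+UnlockExperimentalVMOptions \
     -XX:G1MaxNewSizePercent=30 \
     -XX:G1ReservePercent=15 \
     -jar app.jar
```

G1MaxNewSizePercent caps the young gen (and thus the data copied per GC), while G1ReservePercent keeps extra headroom free as a cushion for promotion.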
>
> > good point regarding G1MaxNewSizePercent. In general, I have been
> > trying to avoid too many customization with G1 and let heuristics
> > decide for itself but if no option, I will try to put this setting
> > and experiment.
>
> We recommend at least trying G1 without extra options first. Very
> often the heuristics are quite successful in achieving their goals.
>
> Thanks,
> Thomas
>
>
--
Thanks & Regards,
Amit.Balode