HDFS Namenode with large heap size
Thomas Schatzl
thomas.schatzl at oracle.com
Sat Feb 9 13:08:23 UTC 2019
Hi,
some minor additions:
On Sat, 2019-02-09 at 13:47 +0100, Thomas Schatzl wrote:
> Hi Fengnan,
>
> While responding to this email, I will also address other
> questions that have already come up in this thread.
>
> Btw, if you ask for help, it would be nice to at least mention the
> JDK version you are using :) - this matters a lot, as you will see.
>
> On Fri, 2019-02-08 at 13:49 -0800, Fengnan Li wrote:
> > Hi All,
> >
> > We are trying to use G1 for our HDFS Namenode to see whether it
> > will deliver better GC behavior overall than the currently used
> > CMS. However, with a 200G heap size JVM option, G1 would not even
> > start our namenode with the production image and gets killed out
> > of memory after running for 1 hour (loading initial data). For the
> > same heap size, CMS works properly with around 98% throughput and
> > an average pause of 120 ms.
What does "killed out of memory" mean? An uncaught OutOfMemoryError
in the Java program, or the OS killing the process? What is the Java
heap usage for CMS in this case? Can you provide GC logs?
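If it is the OS, dmesg usually contains an "Out of memory: Kill
process" entry for the namenode process; an uncaught
java.lang.OutOfMemoryError shows up in the application log instead.
In case GC logging is not enabled yet, something like the following
should do (the log path is just a placeholder, adjust as needed):

  8u:  -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/gc.log
  11:  -Xlog:gc*:file=/tmp/gc.log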
Also, for general tuning questions and answers, please have a look at
the official documentation and tuning guide for G1 [0]. That one is
for 11, but most of the advice still applies to 8u, except for the
particular logging flags/output.
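As a rough starting point for a heap of that size (only a sketch; the
exact values depend on your workload), I would use equal -Xms and
-Xmx plus pre-touching, so the heap does not need to grow during the
initial data load:

  -Xms200g -Xmx200g -XX:+UseG1GC -XX:+AlwaysPreTouch
  -XX:MaxGCPauseMillis=200

Note that on 8u you need -XX:+UseG1GC explicitly (the default
collector there is Parallel GC); on 11, G1 is already the default,
and 200 ms is just the default pause time goal spelled out.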
Thanks,
Thomas
[0] https://docs.oracle.com/en/java/javase/11/gctuning/garbage-first-garbage-collector.html#GUID-ED3AB6D3-FD9B-4447-9EDF-983ED2F7A573