HDFS Namenode with large heap size

Fengnan Li lfengnan at uber.com
Fri Feb 8 21:49:39 UTC 2019


Hi All,

We are trying G1 for our HDFS Namenode to see whether it delivers better overall GC behavior than the CMS collector we currently use. However, with a 200G heap, G1 cannot even start our Namenode against the production image: the process is killed for running out of memory after about an hour (while loading the initial data). With the same heap size, CMS works fine, with roughly 98% throughput and an average pause of 120ms.

We are using pretty much the basic options and have tried a little tuning, but without much progress. Is there a way to lower G1's overall memory footprint?
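For context, our startup flags look roughly like the following (set via HADOOP_NAMENODE_OPTS in hadoop-env.sh; the pause target, logging flags, and exact values here are illustrative rather than our exact production settings):

    # in hadoop-env.sh (values illustrative)
    export HADOOP_NAMENODE_OPTS="-Xms200g -Xmx200g \
        -XX:+UseG1GC \
        -XX:MaxGCPauseMillis=200 \
        -XX:+ParallelRefProcEnabled \
        -verbose:gc -XX:+PrintGCDetails"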

We did manage to start the application with a 300G heap, but G1 then consumes about 450G of memory overall, which is problematic.
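If it helps to see where the extra memory goes, we can enable native memory tracking and pull a summary at runtime; a rough sketch (the PID is a placeholder):

    # added to the startup flags
    -XX:NativeMemoryTracking=summary

    # then, while the Namenode is running
    jcmd <namenode-pid> VM.native_memory summary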

Thanks,
Fengnan