G1 GC heap size is not bounded?
YU ZHANG
yu.zhang at oracle.com
Tue Feb 18 22:45:23 PST 2014
Shengzhe,
All GCs have some memory overhead, and G1 may have more than the other
collectors.
Can you share your data about what has increased?
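
One rough way to see which mappings outside the Java heap are growing
(a sketch, assuming Linux with pmap available; <pid> is the JVM's pid):

    pmap -x <pid>              # per-mapping resident (RSS) breakdown
    pmap -x <pid> | tail -1    # the last line gives the totals

Comparing two snapshots taken a few days apart should show whether the
growth is in the heap mapping itself or in other native allocations
(remembered sets, code cache, thread stacks, direct buffers, malloc arenas).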
Thanks,
Jenny
On 2/18/2014 11:41 AM, yao wrote:
> Hi All,
>
> We've tracked our system memory usage for a week and found that
> it seems to stop increasing once it reaches 100G. Besides the
> RSet, it looks like G1 also allocates other data structures
> off-heap. I have a follow-up question: can we restrict the
> application's memory usage to what we set in -Xmx when G1 is
> enabled? Assuming the application itself doesn't allocate data
> off-heap, that would make its memory usage more predictable.
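
There is no single switch that caps the whole process at -Xmx, but the other
large native consumers can each be bounded explicitly. A sketch for JDK 7
(the values below are placeholders, not tuned recommendations):

    -Xmx83868m -Xms83868m            # Java heap
    -XX:MaxPermSize=256m             # class metadata (PermGen on JDK 7)
    -XX:MaxDirectMemorySize=1g       # NIO direct buffers
    -XX:ReservedCodeCacheSize=256m   # JIT-compiled code
    -Xss512k                         # per-thread stack size

The G1 remembered sets are not capped by any of these; their footprint depends
on the region size (-XX:G1HeapRegionSize) and on how many cross-region
references the application creates.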
>
> Thanks
> Shengzhe
>
>
> On Mon, Feb 10, 2014 at 2:06 PM, yao <yaoshengzhe at gmail.com> wrote:
>
> Hi Ramki,
>
> JDK version is 1.7.0_40
>
> -Shengzhe
>
>
> On Mon, Feb 10, 2014 at 1:36 PM, Srinivas Ramakrishna
> <ysr1729 at gmail.com> wrote:
>
> Hi Shengzhe --
>
> What's the version of the JDK where you're running into this
> issue, and has the JVM had any STW full GCs because of mixed
> GCs not keeping up?
>
> -- ramki
>
>
> On Mon, Feb 10, 2014 at 11:39 AM, yao <yaoshengzhe at gmail.com> wrote:
>
> Hi All,
>
> We've enabled G1 GC on our cluster for about a month, and
> recently we observed that the heap size keeps growing (via the
> RES column in top), though very slowly. My question is: is there
> a way to bound the heap size for G1 GC?
>
> We set the heap size to 82G:
>
> -Xms83868m -Xmx83868m -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
> We found the RES column is about 100G (a few days ago it was
> about 93G).
>
> $ top
>   PID USER   PR NI VIRT  RES  SHR  S %CPU  %MEM  TIME+     COMMAND
>  5757 hbase  20  0 104g  100g 5240 S 271.3 79.6  177771:41 java
>
> In a previous discussion, Thomas Schatzl pointed out that this
> might be due to a large RSet. From the lines below in the GC log,
> we found the RSet size is about 10.5G. So we get Xmx + RSet =
> 82G + 10.5G = 92.5G, which still leaves an unexplained 7.5G of
> off-heap usage.
>
> RSet log:
>
> Concurrent RS processed -1601420092 cards
>   Of 651507426 completed buffers:
>     634241940 ( 97.3%) by conc RS threads.
>      17265486 (  2.7%) by mutator threads.
> Conc RS threads times(s)
>     0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
>     0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> Total heap region rem set sizes = 10980692K.  Max = 16182K.
> Static structures = 563K, free_lists = 78882K.
>   197990656 occupied cards represented.
>   Max size region = 165:(O)[0x00007f0ce0000000,0x00007f0ce2000000,0x00007f0ce2000000], size = 16183K, occupied = 3474K.
> Did 0 coarsenings.
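
(For reference, a summary like the one above is typically produced with the
G1 RSet diagnostic flags; a sketch, assuming 7u40, where these require the
diagnostic-options unlock and the period value is only an example:

    -XX:+UnlockDiagnosticVMOptions
    -XX:+G1SummarizeRSetStats
    -XX:G1SummarizeRSetStatsPeriod=100   # dump RSet stats every 100 GCs

That makes it easier to track whether the rem set totals and free_lists keep
growing between snapshots.)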
>
> Thanks
> -Shengzhe
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use