CMS old gen growth

Gustav Åkesson gustav.r.akesson at gmail.com
Thu Feb 27 23:11:00 PST 2014


Hi,

When setting Xms != Xmx (yes, I know this is a bad idea...) I've seen a
peculiar behavior: the old generation capacity keeps growing ahead of CMS
cycles, but without any FullGC event. These are the settings the
application is running with:

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
-XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime
-XX:+PrintTenuringDistribution -Xloggc:/tmp/gc_logs.txt -XX:+AggressiveOpts
-XX:+UseBiasedLocking -XX:CompileThreshold=5000 -XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC -Xss2048k -Xmx1024M
-Xms512M -Xmn160M
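
As a side note, the committed (i.e. current capacity) size of the old gen
can also be watched from inside the JVM with the standard
java.lang.management API, independently of the GC log. This is only a rough
sketch; the "Old Gen" name matching is an assumption, based on HotSpot
normally reporting the pool as "CMS Old Gen" with this collector:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class OldGenWatcher {
    public static void main(String[] args) throws InterruptedException {
        // Locate the old generation pool; with -XX:+UseConcMarkSweepGC
        // HotSpot normally names it "CMS Old Gen".
        MemoryPoolMXBean oldGen = null;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Old Gen")) {
                oldGen = pool;
            }
        }
        if (oldGen == null) {
            System.err.println("No old gen pool found");
            return;
        }
        // Print the committed size once per second; growth here should
        // line up with the capacity jumps seen in the GC log.
        while (true) {
            long committedK = oldGen.getUsage().getCommitted() / 1024;
            System.out.println(oldGen.getName() + " committed: " + committedK + "K");
            Thread.sleep(1000);
        }
    }
}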


Below are two examples of initiating CMS cycles. Prior to the CMS cycle the
total heap capacity is 507904K (first example), but at the initial mark the
reported capacity has suddenly grown by 192K to 508096K. This pattern is
pretty consistent throughout the execution (a small parsing sketch follows
the two excerpts).


2014-02-27T17:54:33.554+0100: 3209.911: [GC2014-02-27T17:54:33.554+0100:
3209.912: [ParNew
Desired survivor size 8388608 bytes, new threshold 6 (max 6)
- age   1:    1531520 bytes,    1531520 total
- age   3:        160 bytes,    1531680 total
: 132721K->2333K(147456K), 0.0123744 secs] 289498K->159110K(*507904K*),
0.0129755 secs] [Times: user=0.13 sys=0.01, real=0.01 secs]
2014-02-27T17:54:33.567+0100: 3209.925: Total time for which application
threads were stopped: 0.0272471 seconds
2014-02-27T17:54:33.706+0100: 3210.064: Total time for which application
threads were stopped: 0.0133765 seconds
2014-02-27T17:54:33.726+0100: 3210.084: Total time for which application
threads were stopped: 0.0197027 seconds
2014-02-27T17:54:33.739+0100: 3210.097: [GC [1 CMS-initial-mark: 156777K(
*360640K*)] 290183K(*508096K*), 1.8524057 secs] [Times: user=1.85 sys=0.00,
real=1.85 secs]

2014-02-27T19:07:07.828+0100: 7564.088: [GC2014-02-27T19:07:07.828+0100:
7564.089: [ParNew
Desired survivor size 8388608 bytes, new threshold 6 (max 6)
- age   1:    1705520 bytes,    1705520 total
- age   4:         32 bytes,    1705552 total
- age   5:         64 bytes,    1705616 total
- age   6:         32 bytes,    1705648 total
: 132729K->2201K(147456K), 0.0154973 secs] 289657K->159130K(*508096K*),
0.0161130 secs] [Times: user=0.14 sys=0.00, real=0.02 secs]
2014-02-27T19:07:07.845+0100: 7564.105: Total time for which application
threads were stopped: 0.0318814 seconds
2014-02-27T19:07:08.005+0100: 7564.265: Total time for which application
threads were stopped: 0.0153855 seconds
2014-02-27T19:07:08.027+0100: 7564.287: Total time for which application
threads were stopped: 0.0217859 seconds
2014-02-27T19:07:08.049+0100: 7564.309: Total time for which application
threads were stopped: 0.0218527 seconds
2014-02-27T19:07:08.063+0100: 7564.324: [GC [1 CMS-initial-mark: 156929K(
*361024K*)] 290203K(*508480K*), 1.8475537 secs] [Times: user=1.85 sys=0.00,
real=1.85 secs]
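
(The pattern is easy to confirm across the whole log. Below is a rough
sketch that pulls the total heap capacity out of the ParNew and
CMS-initial-mark lines and prints every change; it assumes the original,
unwrapped /tmp/gc_logs.txt file rather than the mail-wrapped excerpts
above.)

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HeapCapacityGrep {
    // Matches capacities such as "(507904K)"; on an unwrapped GC log line
    // the last such group is the total heap capacity.
    private static final Pattern CAP = Pattern.compile("\\((\\d+)K\\)");

    public static void main(String[] args) throws IOException {
        long previous = -1;
        for (String line : Files.readAllLines(Paths.get("/tmp/gc_logs.txt"))) {
            // Heap sizes appear on the "...K->...K(...K)" part of the
            // ParNew output and on the CMS-initial-mark line.
            if (!line.contains("K->") && !line.contains("CMS-initial-mark")) {
                continue;
            }
            Matcher m = CAP.matcher(line);
            long capacity = -1;
            while (m.find()) {
                capacity = Long.parseLong(m.group(1)); // keep the last match
            }
            if (capacity > 0) {
                if (previous > 0 && capacity != previous) {
                    System.out.println("total heap capacity changed: " + previous
                            + "K -> " + capacity + "K (delta "
                            + (capacity - previous) + "K)");
                }
                previous = capacity;
            }
        }
    }
}

For the two excerpts above, that reports the +192K and +384K jumps at the
initial marks.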


The question is: why does the heap grow like this? I was under the
impression that CMS only increases the capacity as part of a FullGC event,
and by considerably more than a few hundred kilobytes when it does. What
I've also observed is that when the heap is NOT increased, the pause is
considerably lower (as shown below). Is it possible that this minor heap
growth is adding to the pause of the initial mark?



2014-02-28T07:32:21.878+0100: 52277.150: [GC2014-02-28T07:32:21.878+0100:
52277.151: [ParNew
Desired survivor size 8388608 bytes, new threshold 6 (max 6)
- age   1:    1021256 bytes,    1021256 total
- age   2:         32 bytes,    1021288 total
- age   5:         32 bytes,    1021320 total
: 132007K->1234K(147456K), 0.0123908 secs] 284921K->154148K(*510400K*),
0.0129916 secs] [Times: user=0.13 sys=0.01, real=0.01 secs]
2014-02-28T07:32:21.891+0100: 52277.164: Total time for which application
threads were stopped: 0.0279730 seconds
2014-02-28T07:32:21.906+0100: 52277.179: [GC [1 CMS-initial-mark:
152913K(362944K)] 155041K(*510400K*), 0.0365786 secs] [Times: user=0.04
sys=0.00, real=0.04 secs]



Best Regards,

Gustav Åkesson