CMS old gen growth
Srinivas Ramakrishna
ysr1729 at gmail.com
Fri Feb 28 12:17:48 PST 2014
I know there were issues of creeping heap growth because of a mismatch
between the heuristics for starting a CMS cycle and the heuristics for
deciding how much free space there should be in the old gen following the
end of a CMS cycle. I can't recall the details, but this has been an issue
with CMS from day one and we never really fixed it. There's a comment in
the resize code or an open bug to fix this behaviour.
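Roughly, the interplay looks something like the sketch below. The numbers
and the post-cycle free-space target are made up purely for illustration
(I don't remember exactly what the sizing code uses), but it shows how a
cycle-start threshold expressed as a fraction of the current capacity and a
post-cycle target expressed as a desired free fraction can keep nudging the
generation upward:

public class CmsCreepSketch {
    public static void main(String[] args) {
        // Hypothetical numbers, for illustration only -- this is NOT the
        // actual HotSpot sizing code, just the shape of the two heuristics.
        double initiatingOccupancy = 0.80; // -XX:CMSInitiatingOccupancyFraction=80
        double desiredFreeRatio    = 0.40; // hypothetical post-cycle free-space target
        long capacityKB = 360_000;         // hypothetical old gen capacity
        long liveKB     = 230_000;         // hypothetical live data left after the sweep

        // Cycle-start heuristic: a cycle is kicked off once occupancy reaches
        // roughly 80% of the *current* capacity.
        long triggerKB = (long) (capacityKB * initiatingOccupancy);

        // Post-cycle sizing heuristic (hypothetical): capacity should be large
        // enough that 40% of it is free, i.e. live data fills only 60% of it.
        long desiredCapacityKB = (long) Math.ceil(liveKB / (1.0 - desiredFreeRatio));

        System.out.printf("cycle trigger at %dK, desired capacity %dK (current %dK)%n",
                triggerKB, desiredCapacityKB, capacityKB);
        // Whenever desiredCapacityKB comes out above capacityKB, the generation
        // is expanded a little; the 80% trigger then moves up with the new
        // capacity, and if the live set keeps inching upward the same thing
        // happens after the next cycle -- hence the creep.
    }
}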
Yes, CMS does resize the heap (all the way up to the max) according to
what it believes the heap size should be, based on how much space is free
following a CMS concurrent collection, without the need for a full gc.
I am not sure, though, what is causing the heap resize to happen before
the CMS initial mark. (I would expect it to happen either at a minor gc,
at an allocation event, or at the end of a CMS sweep phase.) My guess is
that a minor gc or an allocation event needed contiguous space for a larger
object, could not find any, and so grabbed a new, larger chunk from the
unallocated portion of the old gen (but this is just a guess). You might
want to run with -XX:+PrintHeapAtGC, which should tell you when the
expansion occurred. There may be other tracing flags that could provide
more clues, but I am a bit rusty at the moment.
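If it helps while you dig, here is a rough little scan of the log -- just a
throwaway sketch (the class name and everything about it is mine, not a
standard tool) that keys off the usedK(capacityK) figures your
-XX:+PrintGCDetails output already contains, taking the last one on each
line as the whole-heap capacity and reporting every line where it changes:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch only: report every GC log line on which the whole-heap capacity
// (the last "usedK(capacityK)" figure on the line) differs from the one
// seen on the previous matching line.
// Run as: java HeapGrowthScan /tmp/gc_logs.txt
public class HeapGrowthScan {
    private static final Pattern CAPACITY = Pattern.compile("\\d+K\\((\\d+)K\\)");

    public static void main(String[] args) throws IOException {
        long previous = -1;
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
            Matcher m = CAPACITY.matcher(line);
            long last = -1;
            while (m.find()) {
                last = Long.parseLong(m.group(1)); // keep the last match on the line
            }
            if (last < 0) {
                continue; // no used(capacity) figure on this line
            }
            if (previous >= 0 && last != previous) {
                System.out.printf("capacity %dK -> %dK at: %s%n", previous, last, line);
            }
            previous = last;
        }
    }
}

That should at least pin down which log events the 192K jumps line up with.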
-- ramki
On Thu, Feb 27, 2014 at 11:11 PM, Gustav Åkesson <gustav.r.akesson at gmail.com> wrote:
> Hi,
>
> When setting Xms != Xmx (yes, I know this is a bad idea...) I've seen a
> peculiar behavior: a constant increase of old generation capacity prior to
> a CMS cycle, but without a FullGC event. These are the settings the
> application is running with:
>
> -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
> -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime
> -XX:+PrintTenuringDistribution -Xloggc:/tmp/gc_logs.txt -XX:+AggressiveOpts
> -XX:+UseBiasedLocking -XX:CompileThreshold=5000 -XX:+UseConcMarkSweepGC
> -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC -Xss2048k -Xmx1024M
> -Xms512M -Xmn160M
>
>
> Below are two examples of initiating CMS cycles. Prior to the CMS cycle
> the heap is 507904K in total (first example), but right after the initial
> mark the capacity has increased by 192K. This pattern is pretty consistent
> throughout the execution.
>
>
> 2014-02-27T17:54:33.554+0100: 3209.911: [GC2014-02-27T17:54:33.554+0100:
> 3209.912: [ParNew
> Desired survivor size 8388608 bytes, new threshold 6 (max 6)
> - age 1: 1531520 bytes, 1531520 total
> - age 3: 160 bytes, 1531680 total
> : 132721K->2333K(147456K), 0.0123744 secs] 289498K->159110K(*507904K*),
> 0.0129755 secs] [Times: user=0.13 sys=0.01, real=0.01 secs]
> 2014-02-27T17:54:33.567+0100: 3209.925: Total time for which application
> threads were stopped: 0.0272471 seconds
> 2014-02-27T17:54:33.706+0100: 3210.064: Total time for which application
> threads were stopped: 0.0133765 seconds
> 2014-02-27T17:54:33.726+0100: 3210.084: Total time for which application
> threads were stopped: 0.0197027 seconds
> 2014-02-27T17:54:33.739+0100: 3210.097: [GC [1 CMS-initial-mark: 156777K(
> *360640K*)] 290183K(*508096K*), 1.8524057 secs] [Times: user=1.85
> sys=0.00, real=1.85 secs]
>
> 2014-02-27T19:07:07.828+0100: 7564.088: [GC2014-02-27T19:07:07.828+0100:
> 7564.089: [ParNew
> Desired survivor size 8388608 bytes, new threshold 6 (max 6)
> - age 1: 1705520 bytes, 1705520 total
> - age 4: 32 bytes, 1705552 total
> - age 5: 64 bytes, 1705616 total
> - age 6: 32 bytes, 1705648 total
> : 132729K->2201K(147456K), 0.0154973 secs] 289657K->159130K(*508096K*),
> 0.0161130 secs] [Times: user=0.14 sys=0.00, real=0.02 secs]
> 2014-02-27T19:07:07.845+0100: 7564.105: Total time for which application
> threads were stopped: 0.0318814 seconds
> 2014-02-27T19:07:08.005+0100: 7564.265: Total time for which application
> threads were stopped: 0.0153855 seconds
> 2014-02-27T19:07:08.027+0100: 7564.287: Total time for which application
> threads were stopped: 0.0217859 seconds
> 2014-02-27T19:07:08.049+0100: 7564.309: Total time for which application
> threads were stopped: 0.0218527 seconds
> 2014-02-27T19:07:08.063+0100: 7564.324: [GC [1 CMS-initial-mark: 156929K(
> *361024K*)] 290203K(*508480K*), 1.8475537 secs] [Times: user=1.85
> sys=0.00, real=1.85 secs]
>
>
> The question is why the heap grows like this. I was under the
> impression that CMS only increased the capacity during a FullGC event, and
> when it did, by more than a few hundred kilobytes. What I've also
> experienced is that when the heap is NOT increased, the pause is
> considerably lower (as shown below). Is it possible that this minor heap
> growth is adding to the pause of the initial mark?
>
>
>
> 2014-02-28T07:32:21.878+0100: 52277.150: [GC2014-02-28T07:32:21.878+0100:
> 52277.151: [ParNew
> Desired survivor size 8388608 bytes, new threshold 6 (max 6)
> - age 1: 1021256 bytes, 1021256 total
> - age 2: 32 bytes, 1021288 total
> - age 5: 32 bytes, 1021320 total
> : 132007K->1234K(147456K), 0.0123908 secs] 284921K->154148K(*510400K*),
> 0.0129916 secs] [Times: user=0.13 sys=0.01, real=0.01 secs]
> 2014-02-28T07:32:21.891+0100: 52277.164: Total time for which application
> threads were stopped: 0.0279730 seconds
> 2014-02-28T07:32:21.906+0100: 52277.179: [GC [1 CMS-initial-mark:
> 152913K(362944K)] 155041K(*510400K*), 0.0365786 secs] [Times: user=0.04
> sys=0.00, real=0.04 secs]
>
>
>
> Best Regards,
>
> Gustav Åkesson
>