CMS old gen growth
Jon Masamitsu
jon.masamitsu at oracle.com
Fri Feb 28 14:59:00 PST 2014
Ramki reminded me that the increase was happening after
the minor collection completed, so my comments below don't
apply.
Jon
On 2/28/2014 12:21 PM, Jon Masamitsu wrote:
> Gustav,
>
> During a young GC, if CMS cannot get enough space for the
> promotion of an object and there is sufficient room to expand
> the old gen, it will expand it; CMS prefers expanding the old
> gen over a promotion failure. 192K could be right if it is a
> large object that needs promotion. The expansion will be at
> least 128K (somewhat larger on a 64-bit system).
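>
> Purely as an illustration (not from this thread), a small program along
> these lines, run with Gustav's flags, can show that behavior: the retained
> arrays eventually have to be promoted, and when a promotion needs more
> old-gen space than is currently committed, the old gen grows in small
> steps rather than report a promotion failure. The class name and sizes
> below are made up for the sketch.
>
>     // Hedged sketch: keeps the old gen fairly full and occasionally forces
>     // the promotion of a ~1 MB array, so that with -Xms512M -Xmx1024M
>     // -Xmn160M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC the old gen may
>     // expand in small increments instead of hitting a promotion failure.
>     import java.util.ArrayList;
>     import java.util.List;
>
>     public class PromotionExpansion {
>         // Promoted data kept alive so the old gen stays close to full.
>         private static final List<byte[]> retained = new ArrayList<>();
>
>         public static void main(String[] args) {
>             for (long i = 0; ; i++) {
>                 // Short-lived garbage that drives frequent ParNew collections.
>                 byte[] garbage = new byte[64 * 1024];
>                 garbage[0] = (byte) i;
>                 if (i % 10_000 == 0) {
>                     // An occasional survivor that eventually gets promoted.
>                     retained.add(new byte[1024 * 1024]);
>                 }
>                 if (retained.size() > 300) {
>                     // Let CMS reclaim some of the promoted arrays again.
>                     retained.subList(0, 100).clear();
>                 }
>             }
>         }
>     }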
>
> This type of expansion says that there isn't sufficient space
> available in the old gen for the promotion. Since the CMS
> generation uses a free-list allocator, the slowness of the
> GC could be due to more searching in the free lists. That's
> just a guess.
>
> You can turn on -XX:PrintFLSCensus=1 or maybe
> -XX:PrintFLSStatistics=1 or 2 (2 gives more output).
> You might be able to tell if the freelist dictionary is
> full of chunks (and lots of searching is being done).
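>
> For example, appended to the launch command Gustav already uses (the jar
> name below is just a placeholder):
>
>     java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
>          -Xloggc:/tmp/gc_logs.txt -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
>          -Xms512M -Xmx1024M -Xmn160M \
>          -XX:PrintFLSStatistics=2 -XX:PrintFLSCensus=1 \
>          -jar yourapp.jar
>
> The free-list statistics should then show up around each collection in
> the same GC log.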
>
> Jon
>
>
> On 2/27/2014 11:11 PM, Gustav Åkesson wrote:
>> Hi,
>> When setting -Xms != -Xmx (yes, I know this is a bad idea...) I've seen
>> a peculiar behavior: a constant increase in old generation capacity
>> prior to CMS, but without a Full GC event. These are the settings
>> the application is running with:
>> -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
>> -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime
>> -XX:+PrintTenuringDistribution
>> -Xloggc:/tmp/gc_logs.txt -XX:+AggressiveOpts -XX:+UseBiasedLocking
>> -XX:CompileThreshold=5000 -XX:+UseConcMarkSweepGC
>> -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC -Xss2048k
>> -Xmx1024M -Xms512M -Xmn160M
>> Below are two examples of initiating CMS cycles. Prior to the CMS
>> cycle the total heap capacity is 507904K (first example), but by the
>> initial mark the capacity has increased by 192K. This pattern
>> is fairly consistent throughout the execution.
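>> (For reference, the capacities appear to add up as follows, assuming the
>> reported young gen capacity is eden plus one survivor space:
>> -Xms512M minus -Xmn160M = 524288K - 163840K = 360448K initial old gen,
>> 147456K young + 360448K old = 507904K total before the expansion, and
>> 147456K + (360448K + 192K) = 508096K total at the initial mark.)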
>> 2014-02-27T17:54:33.554+0100: 3209.911:
>> [GC2014-02-27T17:54:33.554+0100: 3209.912: [ParNew
>> Desired survivor size 8388608 bytes, new threshold 6 (max 6)
>> - age 1: 1531520 bytes, 1531520 total
>> - age 3: 160 bytes, 1531680 total
>> : 132721K->2333K(147456K), 0.0123744 secs]
>> 289498K->159110K(*507904K*), 0.0129755 secs] [Times: user=0.13
>> sys=0.01, real=0.01 secs]
>> 2014-02-27T17:54:33.567+0100: 3209.925: Total time for which
>> application threads were stopped: 0.0272471 seconds
>> 2014-02-27T17:54:33.706+0100: 3210.064: Total time for which
>> application threads were stopped: 0.0133765 seconds
>> 2014-02-27T17:54:33.726+0100: 3210.084: Total time for which
>> application threads were stopped: 0.0197027 seconds
>> 2014-02-27T17:54:33.739+0100: 3210.097: [GC [1 CMS-initial-mark:
>> 156777K(*_360640K_*)] 290183K(*508096K*), 1.8524057 secs] [Times:
>> user=1.85 sys=0.00, real=1.85 secs]
>> 2014-02-27T19:07:07.828+0100: 7564.088:
>> [GC2014-02-27T19:07:07.828+0100: 7564.089: [ParNew
>> Desired survivor size 8388608 bytes, new threshold 6 (max 6)
>> - age 1: 1705520 bytes, 1705520 total
>> - age 4: 32 bytes, 1705552 total
>> - age 5: 64 bytes, 1705616 total
>> - age 6: 32 bytes, 1705648 total
>> : 132729K->2201K(147456K), 0.0154973 secs]
>> 289657K->159130K(*508096K*), 0.0161130 secs] [Times: user=0.14
>> sys=0.00, real=0.02 secs]
>> 2014-02-27T19:07:07.845+0100: 7564.105: Total time for which
>> application threads were stopped: 0.0318814 seconds
>> 2014-02-27T19:07:08.005+0100: 7564.265: Total time for which
>> application threads were stopped: 0.0153855 seconds
>> 2014-02-27T19:07:08.027+0100: 7564.287: Total time for which
>> application threads were stopped: 0.0217859 seconds
>> 2014-02-27T19:07:08.049+0100: 7564.309: Total time for which
>> application threads were stopped: 0.0218527 seconds
>> 2014-02-27T19:07:08.063+0100: 7564.324: [GC [1 CMS-initial-mark:
>> 156929K(*_361024K_*)] 290203K(*508480K*), 1.8475537 secs] [Times:
>> user=1.85 sys=0.00, real=1.85 secs]
>> The question is why the heap grows like this. I was under the
>> impression that CMS only increased the capacity as part of a Full GC
>> event, and then by more than a few hundred kilobytes. What I've
>> also observed is that when the heap is NOT increased, the
>> pause is considerably lower (as shown below). Is it possible that
>> this minor heap growth is adding to the pause of the initial mark?
>>
>> 2014-02-28T07:32:21.878+0100: 52277.150:
>> [GC2014-02-28T07:32:21.878+0100: 52277.151: [ParNew
>> Desired survivor size 8388608 bytes, new threshold 6 (max 6)
>> - age 1: 1021256 bytes, 1021256 total
>> - age 2: 32 bytes, 1021288 total
>> - age 5: 32 bytes, 1021320 total
>> : 132007K->1234K(147456K), 0.0123908 secs]
>> 284921K->154148K(*510400K*), 0.0129916 secs] [Times: user=0.13
>> sys=0.01, real=0.01 secs]
>> 2014-02-28T07:32:21.891+0100: 52277.164: Total time for which
>> application threads were stopped: 0.0279730 seconds
>> 2014-02-28T07:32:21.906+0100: 52277.179: [GC [1 CMS-initial-mark:
>> 152913K(362944K)] 155041K(*510400K*), 0.0365786 secs] [Times:
>> user=0.04 sys=0.00, real=0.04 secs]
>>
>> Best Regards,
>> Gustav Åkesson
>>
>>
>
>
>