Very long young gc pause (ParNew with CMS)

Kirk Pepperdine kirk at kodewerk.com
Mon Jan 9 19:06:26 UTC 2012


Hi Ramki,

AFAICT, given the limited GC log, the calculated tenuring threshold is always 1, which means he is always flooding the survivor spaces (i.e. suffering from premature promotion). My guess is that the tuning strategy assumes the cost of long-lived objects dominates, and so the heap is configured to minimize (survivor) copy costs. But it would appear that this strategy has backfired. Look at the young gen size: if you do the maths you can see that premature promotion is unavoidable. With the 80% initiating occupancy fraction... well, that can't lead to anything good either. With the VM so misconfigured it's difficult to estimate the true live set size, which could then be used to calculate more reasonable pool sizes.
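
A rough back-of-the-envelope from the flags and the log excerpt below (SurvivorRatio=8 leaves ~27m survivor spaces next to a ~216m eden):

    survivor space capacity    ~ 27648K (~27m)
    desired survivor size      = 90% of that ~= 25480392 bytes (the log line)
    data surviving one cycle   ~= 28260160 bytes (the "age 1" line)  >  desired size
    =>  the threshold is recalculated down to 1 and everything that does not fit
        is promoted straight into the old gen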

So, with all the promotion going on, I suspect that fragmentation is making it difficult to allocate the promoted objects in the tenured generation... hence the long pause time. Would you say that with these large data structures it might be difficult for CMS to parallelize the scan for roots? The abortable pre-clean aborts on time, which means it's not able to clear out much, and given the apparent object life-cycle, is it worth running this phase at all? In fact, would you not guess that the parallel collector would do better in this scenario?
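
If he wanted to test that theory, the switch itself is trivial; a sketch only (young gen sizing deliberately left out, that would still have to come from the measured live set):

    -XX:+UseParallelGC -XX:+UseParallelOldGC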

-- Kirk

ps. I'm always happy to beat you to the punch... 'cos it's very difficult to do. ;-)

On 2012-01-09, at 7:40 PM, Srinivas Ramakrishna wrote:

> Haven't looked at any logs, but setting MaxTenuringThreshold to 31 can be bad. I'd dial that down to 8,
> or leave it at the default of 15. (Your GC logs, which presumably include the tenuring distribution, should
> inform you as to a more optimal value to use. As Kirk noted, premature promotion can be bad, and so can
> survivor space overflow, which can lead to premature promotion and exacerbate fragmentation.)
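> 
> (As a sketch only, with hypothetical young gen and survivor sizes that would need to be validated against that distribution, this could look like:
> 
>   -XX:MaxTenuringThreshold=8 -XX:NewSize=1g -XX:MaxNewSize=1g -XX:SurvivorRatio=6
> 
> i.e. survivors made large enough that the per-cycle surviving volume fits without overflow.)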
> 
> -- ramki
> 
> On Mon, Jan 9, 2012 at 3:08 AM, Florian Binder <java at java4.info> wrote:
> Hi everybody,
> 
> I am using CMS (with ParNew) gc and have very long (> 6 seconds) young
> gc pauses.
> As you can see in the log below, the old-gen heap consists of one large
> free block, the new generation is 256m, the collector uses 13 worker
> threads, and it has to copy 27505761 words (~210mb) directly from eden
> to old gen.
> I have seen that this problem occurs only after about one week of
> uptime, even though we do a full (compacting) gc every night.
> Since real time > user time, I assume it might be a synchronization
> problem. Can this be true?
> 
> Do you have any ideas how I can speed up these gcs?
> 
> Please let me know if you need more information.
> 
> Thank you,
> Flo
> 
> 
> ##### java -version #####
> java version "1.6.0_29"
> Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)
> 
> ##### The startup parameters: #####
> -Xms28G -Xmx28G
> -XX:+UseConcMarkSweepGC \
> -XX:CMSMaxAbortablePrecleanTime=10000 \
> -XX:SurvivorRatio=8 \
> -XX:TargetSurvivorRatio=90 \
> -XX:MaxTenuringThreshold=31 \
> -XX:CMSInitiatingOccupancyFraction=80 \
> -XX:NewSize=256M \
> 
> -verbose:gc \
> -XX:+PrintFlagsFinal \
> -XX:PrintFLSStatistics=1 \
> -XX:+PrintGCDetails \
> -XX:+PrintGCDateStamps \
> -XX:-TraceClassUnloading \
> -XX:+PrintGCApplicationConcurrentTime \
> -XX:+PrintGCApplicationStoppedTime \
> -XX:+PrintTenuringDistribution \
> -XX:+CMSClassUnloadingEnabled \
> -Dsun.rmi.dgc.server.gcInterval=9223372036854775807 \
> -Dsun.rmi.dgc.client.gcInterval=9223372036854775807 \
> 
> -Djava.awt.headless=true
> 
> ##### From the out-file (as of +PrintFlagsFinal): #####
> ParallelGCThreads                         = 13
> 
> ##### The gc.log-excerpt: #####
> Application time: 20,0617700 seconds
> 2011-12-22T12:02:03.289+0100: [GC Before GC:
> Statistics for BinaryTreeDictionary:
> ------------------------------------
> Total Free Space: 1183290963
> Max   Chunk Size: 1183290963
> Number of Blocks: 1
> Av.  Block  Size: 1183290963
> Tree      Height: 1
> Before GC:
> Statistics for BinaryTreeDictionary:
> ------------------------------------
> Total Free Space: 0
> Max   Chunk Size: 0
> Number of Blocks: 0
> Tree      Height: 0
> [ParNew
> Desired survivor size 25480392 bytes, new threshold 1 (max 31)
> - age   1:   28260160 bytes,   28260160 total
> : 249216K->27648K(249216K), 6,1808130 secs]
> 20061765K->20056210K(29332480K)After GC:
> Statistics for BinaryTreeDictionary:
> ------------------------------------
> Total Free Space: 1155785202
> Max   Chunk Size: 1155785202
> Number of Blocks: 1
> Av.  Block  Size: 1155785202
> Tree      Height: 1
> After GC:
> Statistics for BinaryTreeDictionary:
> ------------------------------------
> Total Free Space: 0
> Max   Chunk Size: 0
> Number of Blocks: 0
> Tree      Height: 0
> , 6,1809440 secs] [Times: user=3,08 sys=0,51, real=6,18 secs]
> Total time for which application threads were stopped: 6,1818730 seconds
