Very long young gc pause (ParNew with CMS)

Florian Binder java at java4.info
Mon Jan 9 19:18:13 UTC 2012


Hi Ramki,

Yes, I agree with you: 31 is too large, and I have removed the 
parameter (using the default now). Nevertheless, this is not the 
problem, as the maximum observed tenuring age was always 1.

Since most (more than 90%) of the newly allocated objects in our 
application live for a long time (>1 h), we will mostly see premature 
promotion. Is there a way to optimize for this?
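For a workload like this, where survivor copying is mostly wasted work, one direction people sometimes try is to promote early on purpose and give eden more headroom so scavenges run less often. The command line below is only a hypothetical sketch (the sizes and `app.jar` are made up, not the poster's actual settings):

```shell
# Hypothetical sketch, not the actual command line from this thread:
# with >90% of new objects long-lived, copying them between survivor
# spaces is wasted work, so promote early and enlarge the young gen
# so eden fills (and scavenges run) less often.
java -Xms28G -Xmx28G \
     -XX:+UseConcMarkSweepGC \
     -XX:NewSize=1G -XX:MaxNewSize=1G \
     -XX:MaxTenuringThreshold=1 \
     -jar app.jar
```

Whether a larger young generation helps here depends on how much of each scavenge pause is copying versus the per-collection overhead, so it would need to be verified against the tenuring distribution in the logs.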

I have observed that, most of the time, when a young GC takes very long 
(>6 s), there is only one large free block in the old gen. If there has 
been a CMS old-gen collection and there is more than one block in the 
old generation, the young GC is mostly (though not always) much faster 
(less than 200 ms).

Is it possible that premature promotion cannot be done in parallel when 
there is only one large block in the old gen?
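The numbers in the gc.log excerpt quoted below support this suspicion. A quick back-of-the-envelope check (plain arithmetic, not GC tooling; the class name is made up) of the promoted volume and of the user-versus-real time ratio:

```java
// Back-of-the-envelope check of the figures from the gc.log excerpt
// quoted below: a heap "word" on a 64-bit HotSpot VM is 8 bytes.
public class PromotionVolume {
    public static void main(String[] args) {
        long words = 27505761L;  // copied eden -> old gen, per the log
        long bytes = words * 8L; // 220046088 bytes, i.e. ~210 MB
        System.out.printf("promoted: %.1f MB%n", bytes / (1024.0 * 1024.0));

        // real (6.18 s) is far larger than user (3.08 s) + sys (0.51 s),
        // so the 13 ParNew worker threads spent most of the pause
        // waiting rather than copying.
        double cpuPerReal = (3.08 + 0.51) / 6.18;
        System.out.printf("CPU seconds per wall-clock second: %.2f%n", cpuPerReal);
    }
}
```

With 13 worker threads one would expect user time to be a multiple of real time, not half of it, which is consistent with the workers serializing on something (such as allocation into a single free block).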

In the past we had a fragmentation problem on this server, but it has 
been gone since we increased its memory and started triggering a 
(compacting) full GC every night, as Tony advised us. With the 
initiating occupancy fraction set to 80%, we have only a few (~10) old 
generation collections (which are very fast), and heap fragmentation 
is low.
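The nightly compacting collection mentioned above can be driven from inside the JVM. The sketch below is hypothetical (class name and the 03:00 schedule are invented, and the thread does not appear in this thread's configuration); it assumes `-XX:+ExplicitGCInvokesConcurrent` is NOT set, so that `System.gc()` performs a stop-the-world compacting full collection under CMS:

```java
import java.util.Calendar;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;

// Hypothetical in-process trigger for a nightly compacting full GC.
// Assumes -XX:+ExplicitGCInvokesConcurrent is NOT set, so System.gc()
// is a stop-the-world compacting collection under CMS.
public class NightlyCompaction {

    public static void start() {
        // Daemon thread: the server's own threads keep the JVM alive.
        ScheduledExecutorService ses =
            Executors.newSingleThreadScheduledExecutor(new ThreadFactory() {
                public Thread newThread(Runnable r) {
                    Thread t = new Thread(r, "nightly-gc");
                    t.setDaemon(true);
                    return t;
                }
            });
        ses.scheduleAtFixedRate(new Runnable() {
            public void run() { System.gc(); }
        }, millisUntilHour(3), TimeUnit.DAYS.toMillis(1), TimeUnit.MILLISECONDS);
    }

    // Milliseconds from now until the next occurrence of hourOfDay:00.
    static long millisUntilHour(int hourOfDay) {
        Calendar next = Calendar.getInstance();
        next.set(Calendar.HOUR_OF_DAY, hourOfDay);
        next.set(Calendar.MINUTE, 0);
        next.set(Calendar.SECOND, 0);
        next.set(Calendar.MILLISECOND, 0);
        if (next.getTimeInMillis() <= System.currentTimeMillis()) {
            next.add(Calendar.DAY_OF_MONTH, 1);
        }
        return next.getTimeInMillis() - System.currentTimeMillis();
    }
}
```

An external `cron` job invoking `jcmd`/`jconsole` would work just as well; the in-process variant merely avoids an extra moving part.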

Flo


On 09.01.2012 at 19:40, Srinivas Ramakrishna wrote:
> Haven't looked at any logs, but setting MaxTenuringThreshold to 31 can 
> be bad. I'd dial that down to 8,
> or leave it at the default of 15. (Your GC logs which must presumably 
> include the tenuring distribution should
> inform you as to a more optimal size to use. As Kirk noted, premature 
> promotion can be bad, and so can
> survivor space overflow, which can lead to premature promotion and 
> exacerbate fragmentation.)
>
> -- ramki
>
> On Mon, Jan 9, 2012 at 3:08 AM, Florian Binder <java at java4.info 
> <mailto:java at java4.info>> wrote:
>
>     Hi everybody,
>
>     I am using the CMS (with ParNew) collector and have very long (>6 s)
>     young GC pauses.
>     As you can see in the log below, the old-gen heap consists of one large
>     free block, the new generation is 256 MB, the collector uses 13 worker
>     threads, and it has to copy 27505761 words (~210 MB) directly from eden
>     to the old gen.
>     I have seen that this problem occurs only after about one week of
>     uptime, even though we run a full (compacting) GC every night.
>     Since real time > user time, I assume it might be a synchronization
>     problem. Can this be true?
>
>     Do you have any ideas how I can speed up these GCs?
>
>     Please let me know if you need more information.
>
>     Thank you,
>     Flo
>
>
>     ##### java -version #####
>     java version "1.6.0_29"
>     Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
>     Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)
>
>     ##### The startup parameters: #####
>     -Xms28G -Xmx28G
>     -XX:+UseConcMarkSweepGC \
>     -XX:CMSMaxAbortablePrecleanTime=10000 \
>     -XX:SurvivorRatio=8 \
>     -XX:TargetSurvivorRatio=90 \
>     -XX:MaxTenuringThreshold=31 \
>     -XX:CMSInitiatingOccupancyFraction=80 \
>     -XX:NewSize=256M \
>
>     -verbose:gc \
>     -XX:+PrintFlagsFinal \
>     -XX:PrintFLSStatistics=1 \
>     -XX:+PrintGCDetails \
>     -XX:+PrintGCDateStamps \
>     -XX:-TraceClassUnloading \
>     -XX:+PrintGCApplicationConcurrentTime \
>     -XX:+PrintGCApplicationStoppedTime \
>     -XX:+PrintTenuringDistribution \
>     -XX:+CMSClassUnloadingEnabled \
>     -Dsun.rmi.dgc.server.gcInterval=9223372036854775807 \
>     -Dsun.rmi.dgc.client.gcInterval=9223372036854775807 \
>
>     -Djava.awt.headless=true
>
>     ##### From the out-file (as of +PrintFlagsFinal): #####
>     ParallelGCThreads                         = 13
>
>     ##### The gc.log-excerpt: #####
>     Application time: 20,0617700 seconds
>     2011-12-22T12:02:03.289+0100: [GC Before GC:
>     Statistics for BinaryTreeDictionary:
>     ------------------------------------
>     Total Free Space: 1183290963
>     Max   Chunk Size: 1183290963
>     Number of Blocks: 1
>     Av.  Block  Size: 1183290963
>     Tree      Height: 1
>     Before GC:
>     Statistics for BinaryTreeDictionary:
>     ------------------------------------
>     Total Free Space: 0
>     Max   Chunk Size: 0
>     Number of Blocks: 0
>     Tree      Height: 0
>     [ParNew
>     Desired survivor size 25480392 bytes, new threshold 1 (max 31)
>     - age   1:   28260160 bytes,   28260160 total
>     : 249216K->27648K(249216K), 6,1808130 secs]
>     20061765K->20056210K(29332480K)After GC:
>     Statistics for BinaryTreeDictionary:
>     ------------------------------------
>     Total Free Space: 1155785202
>     Max   Chunk Size: 1155785202
>     Number of Blocks: 1
>     Av.  Block  Size: 1155785202
>     Tree      Height: 1
>     After GC:
>     Statistics for BinaryTreeDictionary:
>     ------------------------------------
>     Total Free Space: 0
>     Max   Chunk Size: 0
>     Number of Blocks: 0
>     Tree      Height: 0
>     , 6,1809440 secs] [Times: user=3,08 sys=0,51, real=6,18 secs]
>     Total time for which application threads were stopped: 6,1818730
>     seconds
>     _______________________________________________
>     hotspot-gc-use mailing list
>     hotspot-gc-use at openjdk.java.net
>     <mailto:hotspot-gc-use at openjdk.java.net>
>     http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>
>


