Really short GC/application time cycle.

Y Srinivas Ramakrishna Y.S.Ramakrishna at Sun.COM
Sat Feb 21 01:00:25 UTC 2009


Hmm, I believe Jim is seeing these messages on 5u12, where
biased locking is not enabled by default as far as I can
tell (and he has not explicitly enabled it, at least judging from
the set of JVM options he listed below). So the pauses might be
for some other reason, perhaps deoptimization because of class
loading. Unfortunately, 5u12 does not support the
+PrintSafepointStatistics flag that would have been useful to get
more visibility into what was going on.

--ramki
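
(For reference: on JDK builds that do support it, safepoint statistics
can be enabled roughly as follows. This is only an illustrative command
line, not the poster's actual invocation; "app.jar" is a placeholder
for the real application.)

```shell
# Hypothetical invocation: enable safepoint statistics on a JDK where
# -XX:+PrintSafepointStatistics is supported (it is not in 5u12).
# PrintSafepointStatisticsCount=1 prints a record for every safepoint
# rather than batching them.
java -XX:+PrintSafepointStatistics \
     -XX:PrintSafepointStatisticsCount=1 \
     -jar app.jar
```

The resulting output names the VM operation behind each safepoint
(deoptimization, GC, revoke-bias, etc.), which is exactly the
visibility missing from the log below.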

----- Original Message -----
From: Rainer Jung <rainer.jung at kippdata.de>
Date: Friday, February 20, 2009 3:41 pm
Subject: Re: Really short GC/application time cycle.
To: Y Srinivas Ramakrishna <Y.S.Ramakrishna at Sun.COM>
Cc: James Nichols <jamesnichols3 at gmail.com>, "hotspot-gc-use at openjdk.java.net" <hotspot-gc-use at openjdk.java.net>


> I expect the answer is the same I got from Tony Printezis in October 
> last year. See
> 
> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2008-October/000478.html
> 
> http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2008-October/000532.html
> 
> Regards,
> 
> Rainer
> 
> On 20.02.2009 19:02, Y Srinivas Ramakrishna wrote:
> >Try -XX:+PrintSafepointStatistics to get some more detail
> >on what might be causing these frequent safepoints.
> >
> >-- ramki
> >
> >----- Original Message -----
> >From: James Nichols<jamesnichols3 at gmail.com>
> >Date: Friday, February 20, 2009 10:00 am
> >Subject: Really short GC/application time cycle.
> >To: "hotspot-gc-use at openjdk.java.net"<hotspot-gc-use at openjdk.java.net>
> >
> >
> >>I was looking through a recent GC log and saw a recurring pattern in
> >>the log output that looked like what is shown below.  The application
> >>was stopped for a very short period of time and the GCs were very
> >>short.  What is perplexing is that my combination of JVM arguments
> >>should cause the heap before/after snapshot to occur when garbage
> >>collection is occurring.  I'm not sure what the JVM is doing that
> >>causes the application/gc to cycle this rapidly, and why there are no
> >>details being printed.  Is this a bug in the log output or am I
> >>missing a setting?
> >>
> >>
> >>Total time for which application threads were stopped: 0.1968050 seconds
> >>Application time: 5.0923120 seconds
> >>Total time for which application threads were stopped: 0.0223190 seconds
> >>Application time: 0.0031100 seconds
> >>Total time for which application threads were stopped: 0.0096730 seconds
> >>Application time: 0.0006320 seconds
> >>Total time for which application threads were stopped: 0.0135390 seconds
> >>Application time: 0.0001600 seconds
> >>Total time for which application threads were stopped: 0.0420840 seconds
> >>Application time: 0.0009400 seconds
> >>Total time for which application threads were stopped: 0.0222590 seconds
> >>Application time: 0.0008400 seconds
> >>
> >>
> >>
> >>My JVM arguments are:
> >>-server -Xms4096m -Xmx4096m -XX:NewSize=1228M -XX:MaxNewSize=1228M
> >>-XX:MaxTenuringThreshold=4 -XX:SurvivorRatio=6 -XX:+ScavengeBeforeFullGC
> >>-XX:PermSize=256M -XX:MaxPermSize=256M -XX:+HandlePromotionFailure
> >>-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ParallelGCThreads=3
> >>-XX:+CMSParallelRemarkEnabled -XX:+CMSClassUnloadingEnabled
> >>-XX:+CMSPermGenSweepingEnabled -verbosegc -XX:+DisableExplicitGC
> >>-XX:+PrintTenuringDistribution -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
> >>-XX:+PrintHeapAtGC -XX:+PrintGCApplicationConcurrentTime
> >>-XX:+PrintGCApplicationStoppedTime -XX:+PrintClassHistogram
> >>-Xloggc:/var/log/jboss/gc.dat
> >>
> >>
> >>
> >>These configuration settings cause the normal heap before/after
> >>snapshot to show up at other times in the log, as shown below:
> >>
> >>5043.055: [GC {Heap before gc invocations=145:
> >>  par new generation   total 1100288K, used 984350K [0x00002aaab21b0000,
> >>0x00002aaafedb0000, 0x00002aaafedb0000)
> >>   eden space 943104K, 100% used [0x00002aaab21b0000, 0x00002aaaebab0000,
> >>0x00002aaaebab0000)
> >>   from space 157184K,  26% used [0x00002aaaf5430000, 0x00002aaaf7c77a60,
> >>0x00002aaafedb0000)
> >>   to   space 157184K,   0% used [0x00002aaaebab0000, 0x00002aaaebab0000,
> >>0x00002aaaf5430000)
> >>  concurrent mark-sweep generation total 2936832K, used 1051213K
> >>[0x00002aaafedb0000, 0x00002aabb21b0000, 0x00002aabb21b0000)
> >>  concurrent-mark-sweep perm gen total 262144K, used 136182K
> >>[0x00002aabb21b0000, 0x00002aabc21b0000, 0x00002aabc21b0000)
> >>5043.056: [ParNew
> >>Desired survivor size 80478208 bytes, new threshold 4 (max 4)
> >>- age   1:   29564048 bytes,   29564048 total
> >>- age   2:    1292208 bytes,   30856256 total
> >>- age   3:    8619864 bytes,   39476120 total
> >>- age   4:     818880 bytes,   40295000 total
> >>: 984350K->39396K(1100288K), 0.1633940 secs]
> >>2035563K->1090727K(4037120K)Heap after gc invocations=146:
> >>  par new generation   total 1100288K, used 39396K [0x00002aaab21b0000,
> >>0x00002aaafedb0000, 0x00002aaafedb0000)
> >>   eden space 943104K,   0% used [0x00002aaab21b0000, 0x00002aaab21b0000,
> >>0x00002aaaebab0000)
> >>   from space 157184K,  25% used [0x00002aaaebab0000, 0x00002aaaee1290e8,
> >>0x00002aaaf5430000)
> >>   to   space 157184K,   0% used [0x00002aaaf5430000, 0x00002aaaf5430000,
> >>0x00002aaafedb0000)
> >>  concurrent mark-sweep generation total 2936832K, used 1051331K
> >>[0x00002aaafedb0000, 0x00002aabb21b0000, 0x00002aabb21b0000)
> >>  concurrent-mark-sweep perm gen total 262144K, used 136182K
> >>[0x00002aabb21b0000, 0x00002aabc21b0000, 0x00002aabc21b0000)
> >>}
> >>, 0.1644490 secs]
_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use