Really short GC/application time cycle.

James Nichols jamesnichols3 at gmail.com
Fri Feb 20 10:00:06 PST 2009


I was looking through a recent GC log and saw a recurring pattern like the one
shown below.  The application was stopped for very short periods and the GCs
themselves were very short.  What is perplexing is that my combination of JVM
arguments should cause a heap before/after snapshot to be printed whenever
garbage collection occurs.  I'm not sure what the JVM is doing that makes the
application/GC cycle this rapidly, or why no details are being printed.  Is
this a bug in the log output, or am I missing a setting?


Total time for which application threads were stopped: 0.1968050 seconds
Application time: 5.0923120 seconds
Total time for which application threads were stopped: 0.0223190 seconds
Application time: 0.0031100 seconds
Total time for which application threads were stopped: 0.0096730 seconds
Application time: 0.0006320 seconds
Total time for which application threads were stopped: 0.0135390 seconds
Application time: 0.0001600 seconds
Total time for which application threads were stopped: 0.0420840 seconds
Application time: 0.0009400 seconds
Total time for which application threads were stopped: 0.0222590 seconds
Application time: 0.0008400 seconds
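
To quantify how much of the wall time these cycles account for, here is a rough
sketch (not part of the original log) that tallies the two kinds of safepoint
lines shown above.  The exact line prefixes are assumed from this excerpt;
adjust them if your JVM prints something slightly different:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Sketch only: sums "Application time" vs. "threads were stopped" lines
// from a GC log whose format matches the excerpt above.
public class StoppedTimeTally {
    public static void main(String[] args) throws IOException {
        double running = 0.0, stopped = 0.0;
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        String line;
        while ((line = in.readLine()) != null) {
            if (line.startsWith("Application time:")) {
                running += seconds(line);
            } else if (line.startsWith("Total time for which application threads were stopped:")) {
                stopped += seconds(line);
            }
        }
        in.close();
        System.out.printf("running=%.4fs stopped=%.4fs (%.1f%% of wall time stopped)%n",
                running, stopped, 100.0 * stopped / (running + stopped));
    }

    // Grabs the "N.NNNNNNN" value that comes right before the trailing "seconds".
    private static double seconds(String line) {
        String[] parts = line.trim().split("\\s+");
        return Double.parseDouble(parts[parts.length - 2]);
    }
}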



My JVM arguments are:
-server -Xms4096m -Xmx4096m -XX:NewSize=1228M -XX:MaxNewSize=1228M
-XX:MaxTenuringThreshold=4 -XX:SurvivorRatio=6 -XX:+ScavengeBeforeFullGC
-XX:PermSize=256M -XX:MaxPermSize=256M -XX:+HandlePromotionFailure
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ParallelGCThreads=3
-XX:+CMSParallelRemarkEnabled -XX:+CMSClassUnloadingEnabled
-XX:+CMSPermGenSweepingEnabled -verbosegc -XX:+DisableExplicitGC
-XX:+PrintTenuringDistribution -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
-XX:+PrintHeapAtGC -XX:+PrintGCApplicationConcurrentTime
-XX:+PrintGCApplicationStoppedTime -XX:+PrintClassHistogram
-Xloggc:/var/log/jboss/gc.dat
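
For reproducing the behavior outside the application, a trivial allocation loop
along these lines (purely illustrative; the class name, sizes, and counts are
made up) should drive regular ParNew collections when run with the same flags,
so the +PrintHeapAtGC and +PrintGCApplicationStoppedTime output can be compared:

import java.util.ArrayList;
import java.util.List;

// Illustrative allocation churn: creates short-lived garbage while retaining
// a small slice of it, so young-generation collections occur regularly.
public class AllocChurn {
    public static void main(String[] args) throws InterruptedException {
        List<byte[]> retained = new ArrayList<byte[]>();
        for (int i = 0; ; i++) {
            byte[] chunk = new byte[64 * 1024];   // short-lived garbage
            if (i % 100 == 0) {
                retained.add(chunk);              // keep ~1% alive so some objects age
            }
            if (retained.size() > 4096) {
                retained.clear();                 // cap retained set (~256 MB) to avoid OOME
            }
            if (i % 10000 == 0) {
                Thread.sleep(10);                 // create distinct application-time intervals
            }
        }
    }
}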



These configuration settings do cause the normal heap before/after snapshot to
show up at other times in the log, as shown below:

5043.055: [GC {Heap before gc invocations=145:
 par new generation   total 1100288K, used 984350K [0x00002aaab21b0000,
0x00002aaafedb0000, 0x00002aaafedb0000)
  eden space 943104K, 100% used [0x00002aaab21b0000, 0x00002aaaebab0000,
0x00002aaaebab0000)
  from space 157184K,  26% used [0x00002aaaf5430000, 0x00002aaaf7c77a60,
0x00002aaafedb0000)
  to   space 157184K,   0% used [0x00002aaaebab0000, 0x00002aaaebab0000,
0x00002aaaf5430000)
 concurrent mark-sweep generation total 2936832K, used 1051213K
[0x00002aaafedb0000, 0x00002aabb21b0000, 0x00002aabb21b0000)
 concurrent-mark-sweep perm gen total 262144K, used 136182K
[0x00002aabb21b0000, 0x00002aabc21b0000, 0x00002aabc21b0000)
5043.056: [ParNew
Desired survivor size 80478208 bytes, new threshold 4 (max 4)
- age   1:   29564048 bytes,   29564048 total
- age   2:    1292208 bytes,   30856256 total
- age   3:    8619864 bytes,   39476120 total
- age   4:     818880 bytes,   40295000 total
: 984350K->39396K(1100288K), 0.1633940 secs]
2035563K->1090727K(4037120K)Heap after gc invocations=146:
 par new generation   total 1100288K, used 39396K [0x00002aaab21b0000,
0x00002aaafedb0000, 0x00002aaafedb0000)
  eden space 943104K,   0% used [0x00002aaab21b0000, 0x00002aaab21b0000,
0x00002aaaebab0000)
  from space 157184K,  25% used [0x00002aaaebab0000, 0x00002aaaee1290e8,
0x00002aaaf5430000)
  to   space 157184K,   0% used [0x00002aaaf5430000, 0x00002aaaf5430000,
0x00002aaafedb0000)
 concurrent mark-sweep generation total 2936832K, used 1051331K
[0x00002aaafedb0000, 0x00002aabb21b0000, 0x00002aabb21b0000)
 concurrent-mark-sweep perm gen total 262144K, used 136182K
[0x00002aabb21b0000, 0x00002aabc21b0000, 0x00002aabc21b0000)
}
, 0.1644490 secs]

