Rapid fluctuation of tenuring threshold

James Nichols jamesnichols3 at gmail.com
Thu Jan 10 23:54:50 UTC 2008


Hello,

My application has a fairly high-volume streaming data flow that creates a lot of
short-lived objects.  I've made the young generation fairly large and have a
pretty big survivor space, but I still have a number of objects that end up in
the old generation.  As a result of this, my old generation size ramps up
slowly over time until these objects are dead, then they all get cleaned up
and there is a big drop in old generation usage, from about 2.5GBs to about
1GB.  Most of the time it takes about 80 minutes to get up to 2.5GBs, but
occasionally it happens much more rapidly, as fast as every 8 minutes.  My
workload does fluctuate over time, but not enough to explain this change in
garbage collection behavior.  I'm seeing some very odd behavior in the JVM's
tuning of the tenuring threshold.

Attached is a chart that plots the old generation over time (the blue line
with a moving average in yellow).  Each dot is a garbage collection (usually
the young generation).  On this chart, I also plotted the tenuring threshold
in red, with a moving average in black.  Each dot represents what the
threshold was for that particular garbage collection.  You can see around
time 11,500 minutes the peaks/valleys become much more frequent as more data
is ending up in the old generation.  During this time, the moving average
of the tenuring threshold drops substantially.  This also happens during
other periods where the old generation fills up quickly.  I'm not sure
if this is a cause or an effect.  I'm leaning towards the tenuring threshold
being the cause because the objects ending up in the old generation are
typically reclaimed in a very short period of time.

I've analyzed the distribution of the tenuring threshold over a substantial
period of time in my application.  It appears that 25% of the time the
tenuring threshold is 1, 60% of the time it's 16, and the other 15% is
distributed somewhere in the middle.  I'm a bit puzzled why the threshold
would so rapidly change between 1 and 16, and wonder if I'm getting hosed by
the auto-tuning of this threshold.  Looking at the log, it rapidly goes from
1 to 16, very rarely stopping at intermediate values.  Is it reasonable to think
that something is wrong with the auto-tuning, making it stay at 1 and causing
the behavior I noted above?
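
For what it's worth, here is roughly the kind of thing I used to tally those
numbers from gc.dat (a minimal sketch; it assumes the usual
"... new threshold N (max M)" lines that -XX:+PrintTenuringDistribution writes,
and the class name is just a placeholder):

    // Rough sketch: tally how often each tenuring threshold shows up in gc.dat.
    // Assumes the usual "... new threshold N (max M)" lines produced by
    // -XX:+PrintTenuringDistribution; run as: java ThresholdHistogram gc.dat
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.Map;
    import java.util.TreeMap;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ThresholdHistogram {
        public static void main(String[] args) throws Exception {
            Pattern p = Pattern.compile("new threshold (\\d+) \\(max \\d+\\)");
            Map<Integer, Integer> counts = new TreeMap<Integer, Integer>();
            int total = 0;
            BufferedReader in = new BufferedReader(new FileReader(args[0]));
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = p.matcher(line);
                if (m.find()) {
                    int t = Integer.parseInt(m.group(1));
                    Integer c = counts.get(t);
                    counts.put(t, c == null ? 1 : c + 1);
                    total++;
                }
            }
            in.close();
            // Print the percentage of collections that used each threshold value.
            for (Map.Entry<Integer, Integer> e : counts.entrySet()) {
                System.out.printf("threshold %2d: %5.1f%%%n",
                                  e.getKey(), 100.0 * e.getValue() / total);
            }
        }
    }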

I'm running JDK 1.5.0_12 on Red Hat Linux and my application server is JBoss
4.0.5.  My GC settings are below... I can send the gc.dat log if you want
it; it's pretty big, so I won't email it to the list.  I've signed up to the
mailing list, but please CC me on any replies.

Thanks, James

-server -Xms5170m -Xmx5170m
-XX:NewSize=1536M -XX:MaxNewSize=1536M
-XX:PermSize=512M -XX:MaxPermSize=512M
-XX:MaxTenuringThreshold=30 -XX:SurvivorRatio=10
-XX:+ScavengeBeforeFullGC
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC -XX:ParallelGCThreads=3
-XX:+CMSParallelRemarkEnabled -XX:+CMSClassUnloadingEnabled
-XX:+CMSPermGenSweepingEnabled -XX:+DisableExplicitGC
-XX:+PrintTenuringDistribution -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
-XX:+PrintHeapAtGC
-XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime
-Xloggc:gc.dat
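
For reference, assuming the usual HotSpot meaning of SurvivorRatio (the ratio
of eden to a single survivor space), those sizes work out to roughly:

    survivor space = NewSize / (SurvivorRatio + 2) = 1536M / 12 = 128M each
    eden           = NewSize - 2 * 128M            = 1280M

so each survivor space is about 128MB.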

[Attachment: tenuring.png (43 KB), the chart described above:
<https://mail.openjdk.org/pipermail/hotspot-gc-dev/attachments/20080110/1a9501cb/tenuring.png>]