Perplexing GC Time Growth
Jason Vasquez
jason at mugfu.com
Tue Jan 8 16:16:33 UTC 2008
Well, interesting finds yesterday --
You had me thinking about GC roots again -- I inspected the GC roots
via jhat after a 24-hour run of the application, and noticed some
surprising results. The jhat output was quite large and ended up
crashing the browser after it consumed >2 GB of RAM :) I dumped it out
via curl instead, and stopped after the HTML output topped 350 MB over
the slow VPN link. Even with that limited output, it appears there
were >1.5 million JNI local references hanging around (and likely more
if I had been able to obtain the full output).
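For anyone trying to reproduce this, the rough procedure was something
like the following (file names and the heap size are illustrative, not
our exact invocation):

    jmap -dump:format=b,file=heap.hprof <pid>          # snapshot the running JVM's heap
    jhat -J-Xmx2g heap.hprof                            # serves the analysis on http://localhost:7000 by default
    curl -s http://localhost:7000/ -o jhat-index.html   # pull pages over the VPN instead of a browser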
Looks like the count sits idle at about 5 local refs, and grows very
rapidly during normal running of the application (into the tens of
thousands after a couple of minutes). I peppered the native code with
DeleteLocalRef calls anywhere a local reference was created, and the
number of JNI local refs now stays steady at 52-53 under full load.
(!) The JNI documentation seems to indicate that I shouldn't need to
do that for this type of reference, but in any case, that is an
extreme difference.
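For the archives, here is roughly the shape of the change -- a minimal
sketch with made-up names, not our actual code. The point is that
local refs are only released automatically when the native frame
returns to the JVM, so a native method that loops over many objects
(or a thread attached via AttachCurrentThread) keeps them alive as GC
roots until they are explicitly deleted:

    #include <jni.h>

    /* Hypothetical example -- function and class names are illustrative only. */
    JNIEXPORT void JNICALL
    Java_com_example_Worker_processAll(JNIEnv *env, jobject self, jobjectArray items)
    {
        jsize n = (*env)->GetArrayLength(env, items);
        for (jsize i = 0; i < n; i++) {
            /* GetObjectArrayElement hands back a fresh local reference each time. */
            jobject item = (*env)->GetObjectArrayElement(env, items, i);
            if (item == NULL)
                continue;

            /* ... do the real work with 'item' here ... */

            /* Without this, every local ref stays a GC root until the native
               method returns -- in a long-running loop they pile up by the
               thousands, which is exactly what jhat was showing. */
            (*env)->DeleteLocalRef(env, item);
        }
    }

(PushLocalFrame/PopLocalFrame around the loop body, or
EnsureLocalCapacity, would be alternative ways to keep the count
bounded.)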
This makes sense given the linear increase in GC times, since the
number of JNI local refs (and, by extension, GC roots) was likely
growing at a quite rapid and linear pace.
I've started a new run in our load test environment now, with high
hopes of success this time; I'll let you know how it turns out. I had
temporarily disabled the PermGen sweep at the beginning of the test,
and am seeing some very slow growth in the perm gen, so I might still
need that flag back before we're all done, but this is a good start.
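For anyone following along, the flag combination Ramki describes below
for 6u3 would look something like this on the command line (heap
sizing and the rest of our options omitted; this is just for
illustration):

    java -XX:+UseConcMarkSweepGC \
         -XX:+CMSClassUnloadingEnabled \
         -XX:+CMSPermGenSweepingEnabled \
         ... <main class>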
Thanks for your assistance - I'm hoping for the best,
-jason
On Jan 5, 2008, at 09:46 , Jason Vasquez wrote:
> Hi Ramki,
>
> Sorry for the delay, I'm on vacation visiting some family over a
> long weekend, trying not to draw the wrath of my wife for stealing
> time away hopping on the VPN to check things out :)
>
> Anyway, I've attached the latest garbage log. Unfortunately, it
> doesn't seem to have changed things much (although I can see that
> the flags are now taking effect).
>
> -jason
>
> <garbage.log.gz>
>
>
>
> On Jan 4, 2008, at 14:54 , Y Srinivas Ramakrishna wrote:
>
>> Hi Jason -- did you get a chance to check what happens to scavenge
>> times with both of the class unloading options specified, as stated
>> below?
>>
>> thanks!
>> -- ramki
>>
>> ----- Original Message -----
>> From: Y.S.Ramakrishna at Sun.COM
>> Date: Thursday, January 3, 2008 4:10 pm
>> Subject: Re: Perplexing GC Time Growth
>> To: Jason Vasquez <jason at mugfu.com>
>> Cc: hotspot-gc-use at openjdk.java.net
>>
>>
>>> Hi Jason --
>>>
>>> Y.S.Ramakrishna at Sun.COM wrote:
>>> ...
>>>> While we think of potential reasons for this, or mull over
>>>> appropriate sensors that can lay bare the root cause here,
>>>> could you, on a quick hunch, do a quick experiment and tell
>>>> me if adding the options -XX:+CMSClassUnloadingEnabled makes
>>>> any difference to the observations above. [Does your application
>>>> do lots of class loading or reflection or string interning?]
>>>
>>> I think I screwed up a bit here. I was assuming you were using
>>> the (newest) OpenJDK build. As it happens, in 6u3 and older
>>> you need to specify both +CMSClassUnloadingEnabled _and_
>>> +CMSPermGenSweepingEnabled, else you do not get the desired effect.
>>> Sorry -- this was fixed after 6u3, so the additional flag alone did
>>> not do anything in your case.
>>>
>>> Let me know if the additional flag (please use both) made
>>> a difference, and apologies for the bad user interface here
>>> (set right as of 6u4 and in OpenJDK).
>>>
>>> -- ramki
>>>
>>>>
>>>
>
_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use