the amazing tales of the search for the invisible man! or, where's my gc root
kirk
kirk at kodewerk.com
Fri Apr 17 19:07:12 UTC 2009
Forgot to attach.
Regards,
Kirk
Tony Printezis wrote:
> Well, I was going to point the finger towards the class loaders...
> but Kirk here did a much more thorough analysis! :-)
>
> Tony
>
> kirk wrote:
>> Hi Jed,
>>
>> I've had a quick look at the heap dump. I'm having a little trouble
>> understanding what is in there. What I can see is a large number of
>> java.lang.reflect.Method objects being held. There seem to be two
>> competing patterns of references holding onto these objects. I've
>> attached some screenshots rather than use words.
>>
>> The scary thing is that the references include ClassLoader.scl,
>> JDK12Hooks.systemClassLoader, as well as the Apache Commons Logging
>> LogFactory. With this kind of complex entanglement it would seem
>> unlikely that these objects would ever be collected. The other
>> pattern also includes this spider's web of references, as well as
>> UberspectImpl and a whole bunch of static collections. IME, static
>> collections are involved in the vast majority of leaks I've diagnosed.
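>>
>> (To illustrate the general shape of that pattern - the names below are
>> made up for illustration, not taken from this dump - a static collection
>> owned by a class in a long-lived loader pins whatever it references,
>> including the classes and the ClassLoader of the values it holds:)
>>
>> import java.util.HashMap;
>> import java.util.Map;
>>
>> // Imagine this class is loaded by a long-lived (e.g. system) ClassLoader.
>> public class GlobalCache {
>>     // Static, so it lives as long as GlobalCache's own ClassLoader does.
>>     private static final Map<String, Object> CACHE =
>>             new HashMap<String, Object>();
>>
>>     public static void put(String key, Object value) {
>>         // value -> value.getClass() -> the plugin's ClassLoader, which
>>         // can never be collected while this entry stays in the map.
>>         CACHE.put(key, value);
>>     }
>> }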
>>
>> Interestingly enough, a portion of the 2nd largest consumer of
>> memory is also tangled up in JDK12Hooks. Random sampling leads me
>> to AST parse trees and "no reference". It looks like much of this is
>> tied up with Velocity. In fact, the largest consumer of memory, at
>> 24%, is char[]. I'm failing to find anything that is not tied up with
>> Velocity (AST parsing).
>>
>> This needs more investigation. It would be interesting to run a test
>> with generations turned on. The NetBeans generation count is a true
>> count, unlike the one provided by YourKit.
>>
>> Regards,
>> Kirk
>>
>>
>> Jed Wesley-Smith wrote:
>>> Classes as well. We end up getting an OOME although the profilers
>>> report that only a third of the heap is reachable.
>>>
>>> Although I indicated we saw this on the IBM JDK, analysis of that
>>> dump showed a completely different issue that apparently may not be
>>> a problem (due to reflection optimisation on that JDK) - the dead
>>> objects appear to have been correctly cleared. We are reproducing
>>> this to verify.
>>>
>>> Additionally, we tried running with -client on the Sun JVMs (we saw
>>> a bug that might have caused this reported against -server only),
>>> but without success.
>>>
>>> cheers,
>>> jed.
>>>
>>> On 16/04/2009, at 12:51 AM, Tony Printezis
>>> <Antonios.Printezis at sun.com> wrote:
>>>
>>>> OK, I'll bite.
>>>>
>>>> When you say: "a large section of memory (a plugin framework)" do
>>>> you mean only objects in the young / old gen, or also classes in
>>>> the perm gen?
>>>>
>>>> How do you know that said memory is not being reclaimed? Do you
>>>> eventually get an OOM?
>>>>
>>>> Given that it happens with two different JVMs (I assume you use
>>>> HotSpot on Linux and Mac, as well as the IBM JDK), it's unlikely to
>>>> be a GC bug, as both JVMs would need to have the same bug. Not
>>>> impossible, but unlikely, IMHO.
>>>>
>>>> Tony
>>>>
>>>> Jed Wesley-Smith wrote:
>>>>> all,
>>>>>
>>>>> I am writing to this list in some desperation hoping for some
>>>>> expert advice. We (the JIRA development team at Atlassian) have
>>>>> been hunting memory leaks for some weeks and in the process have
>>>>> tracked down and removed every possible reference to a large
>>>>> section of memory (a plugin framework) that we could find.
>>>>> We started with all strong references and proceeded to remove soft
>>>>> and weak references - even things like clearing the
>>>>> java.lang.reflect.Proxy cache - and even Finalizer references,
>>>>> until YourKit, Eclipse MAT, JProfiler and jhat all reported
>>>>> that the memory in question is dead and should be collectable, but
>>>>> inexplicably _the JVM still holds on to it_. There are no JNI
>>>>> Global references either, yet this memory remains uncollectable!
>>>>>
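>>>>> (On the Finalizer side, one quick in-VM sanity check - just an
>>>>> illustrative sketch, not something taken from our dumps - is whether
>>>>> anything is still queued waiting for finalize() to run, since such
>>>>> objects cannot be reclaimed until the finalizer thread gets to them:)
>>>>>
>>>>> import java.lang.management.ManagementFactory;
>>>>> import java.lang.management.MemoryMXBean;
>>>>>
>>>>> public class FinalizerQueueCheck {
>>>>>     public static void main(String[] args) {
>>>>>         MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
>>>>>         // Objects that are dead but waiting for finalize() to run;
>>>>>         // their memory is not freed until finalization completes.
>>>>>         System.out.println("pending finalization: "
>>>>>                 + memory.getObjectPendingFinalizationCount());
>>>>>     }
>>>>> }
>>>>>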
>>>>> We see this with the 1.5 and 1.6 JVMs on Linux and Mac, and with
>>>>> the IBM 1.6 JDK on Linux.
>>>>>
>>>>> So my question is, how on earth do I search for what is
>>>>> referencing this uncollectable memory? Are there any other tools
>>>>> that can help find why this memory is not collected? Can I query
>>>>> the VM directly somehow?
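>>>>>
>>>>> One crude check we can do directly in the VM, independent of any
>>>>> profiler, is to hold only a WeakReference to the plugin framework's
>>>>> ClassLoader and see whether repeated GC requests ever clear it. A
>>>>> minimal sketch (the empty URLClassLoader is just a stand-in for the
>>>>> real Felix bundle loader, and System.gc() is only a request):
>>>>>
>>>>> import java.lang.ref.WeakReference;
>>>>> import java.net.URL;
>>>>> import java.net.URLClassLoader;
>>>>>
>>>>> public class CollectabilityCheck {
>>>>>     public static void main(String[] args) throws Exception {
>>>>>         // Stand-in for the plugin framework's ClassLoader.
>>>>>         ClassLoader pluginLoader = new URLClassLoader(new URL[0]);
>>>>>         WeakReference<ClassLoader> canary =
>>>>>                 new WeakReference<ClassLoader>(pluginLoader);
>>>>>
>>>>>         pluginLoader = null; // drop our only strong reference
>>>>>
>>>>>         for (int i = 0; i < 10 && canary.get() != null; i++) {
>>>>>             System.gc();     // a request, not a guarantee
>>>>>             Thread.sleep(100);
>>>>>         }
>>>>>         System.out.println(canary.get() == null
>>>>>                 ? "loader was collected"
>>>>>                 : "loader is still strongly reachable from somewhere");
>>>>>     }
>>>>> }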
>>>>>
>>>>> I fear this is a JVM GC bug as no known memory analysis tool can
>>>>> find the heap root (i.e. according to "the rules" there is no heap
>>>>> root). Are there any known GC memory leaks caused by ClassLoaders
>>>>> being dropped for instance?
>>>>>
>>>>> The application is creating and disposing of a lot of ClassLoaders
>>>>> via OSGi (Apache Felix) with Spring OSGi. It creates a lot of
>>>>> java.lang.reflect.Proxy class instances.
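>>>>>
>>>>> (In case it is relevant: each generated Proxy class is defined in the
>>>>> ClassLoader passed to Proxy.newProxyInstance, so any still-reachable
>>>>> proxy instance - or anything caching its Class - keeps that loader
>>>>> alive. A toy illustration with a made-up interface:)
>>>>>
>>>>> import java.lang.reflect.InvocationHandler;
>>>>> import java.lang.reflect.Method;
>>>>> import java.lang.reflect.Proxy;
>>>>>
>>>>> public class ProxyPinning {
>>>>>     // Made-up interface standing in for a plugin-exported service.
>>>>>     public interface PluginService {
>>>>>         void ping();
>>>>>     }
>>>>>
>>>>>     public static void main(String[] args) {
>>>>>         ClassLoader pluginLoader = PluginService.class.getClassLoader();
>>>>>         Object proxy = Proxy.newProxyInstance(
>>>>>                 pluginLoader,
>>>>>                 new Class<?>[] { PluginService.class },
>>>>>                 new InvocationHandler() {
>>>>>                     public Object invoke(Object p, Method m, Object[] a) {
>>>>>                         return null;
>>>>>                     }
>>>>>                 });
>>>>>         // The proxy's Class was defined in pluginLoader, so while the
>>>>>         // proxy (or a cache of its Class) is reachable, pluginLoader
>>>>>         // cannot be collected. Expected output: true
>>>>>         System.out.println(proxy.getClass().getClassLoader() == pluginLoader);
>>>>>     }
>>>>> }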
>>>>>
>>>>> We have written this up and added an example heap dump here:
>>>>> http://jira.atlassian.com/browse/JRA-16932
>>>>>
>>>>> Having come to the end of our tethers here, if anyone can help in
>>>>> any way it would be massively appreciated.
>>>>>
>>>>> cheers,
>>>>> Jed Wesley-Smith
>>>>> JIRA Team @ Atlassian
>>>>
>>>> --
>>>> ---------------------------------------------------------------------
>>>> | Tony Printezis, Staff Engineer | Sun Microsystems Inc. |
>>>> | | MS UBUR02-311 |
>>>> | e-mail: tony.printezis at sun.com | 35 Network Drive |
>>>> | office: +1 781 442 0998 (x20998) | Burlington, MA 01803-2756, USA |
>>>> ---------------------------------------------------------------------
>>>> e-mail client: Thunderbird (Linux)
>>>>
>>>>
>>>
>>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Picture 3(2).png
Type: image/png
Size: 239027 bytes
Desc: not available
URL: <https://mail.openjdk.org/pipermail/hotspot-gc-dev/attachments/20090417/65361260/Picture32.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Picture 4(2).png
Type: image/png
Size: 216843 bytes
Desc: not available
URL: <https://mail.openjdk.org/pipermail/hotspot-gc-dev/attachments/20090417/65361260/Picture42.png>