Tracking size of the object that caused the collection

Peter B. Kessler Peter.Kessler at Sun.COM
Tue Apr 7 00:43:06 UTC 2009


What Ramki said.  You can use -XX:+PrintHeapAtGC to see the number of bytes remaining in the eden space before each collection.  Part of Ramki's approximation is that eden is allocated in thread-local allocation buffers (TLABs), so eden may look full (of TLABs) when an allocation from a TLAB fails.
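For instance, a little program like this (the class name and sizes are made up) will generate enough allocation to watch eden fill and drain with that flag:

    // Run with: java -XX:+PrintHeapAtGC AllocationPressure
    public class AllocationPressure {
        public static void main(String[] args) {
            byte[] keep = null;
            for (int i = 0; i < 1000000; i++) {
                // Small, short-lived allocations: most of these die in
                // eden, so collections are triggered by eden filling up.
                keep = new byte[1024];
            }
            // Use the last array so the allocation isn't optimized away.
            System.out.println(keep.length);
        }
    }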

You can see the distribution of the sizes of the objects in the heap at any given time with "jmap -histo".  That will show you the number of instances of each class and the total number of bytes occupied by those instances.  For objects with fixed sizes (i.e., not arrays), you can use those numbers to figure out the size of an instance of each class.  Then you can figure out your allocation distribution, and the probability that an allocation of any given class will cause a collection.
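For example (all of the numbers here are invented, and the class is hypothetical), the arithmetic is just a division:

    public class HistogramMath {
        public static void main(String[] args) {
            // Hypothetical values, as if read off one line of
            // "jmap -histo <pid>" for a fixed-size class.
            long instances  = 50000;    // the #instances column
            long totalBytes = 1600000;  // the #bytes column
            System.out.println("bytes per instance = "
                               + (totalBytes / instances));  // 32
        }
    }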

It could also be that your objects are so large that they don't fit in a TLAB (TLABs grow and shrink as needed, but only within limits), in which case you'll be doing slow-path allocation for your large objects.  That grabs a lock, but it's not a highly-contended lock, and it shouldn't be noticeably slower to allocate one large object than many small objects that occupy the same amount of memory.  Large objects take longer to initialize than small objects, but again, not longer than initializing many small objects in the same amount of memory.

It could be that your objects are large enough to be allocated directly in the old generation.  If those objects are short-lived, then you will be causing more old generation collections, which are not designed for short-lived objects and so will have more overhead than if the objects were allocated and collected from the young generation.  But you didn't actually say what problem you are trying to solve.
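As a sketch (the sizes and counts are arbitrary), arrays like these are likely to be bigger than any TLAB, and, depending on the collector and its thresholds, may be allocated directly in the old generation:

    public class LargeAllocations {
        public static void main(String[] args) {
            long sum = 0;
            for (int i = 0; i < 1000; i++) {
                // ~4 MB each: too big for a TLAB, so these take the
                // slow allocation path.
                int[] big = new int[1024 * 1024];
                big[i] = i;
                sum += big[i];
            }
            // Use the result so the loop isn't optimized away.
            System.out.println(sum);
        }
    }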

This sounds like a strange lamppost to be looking under.  Collections are triggered (usually) by generations being too full to satisfy an allocation request.  Then the cost of the collection should be thought of as amortized over all the allocations that filled the generation.  Of course larger objects have a greater chance of being the ones that don't fit when a generation is getting full.  But looking at just the object that pushes you over the edge is a biased sampling.  A large object that causes a collection is no more "at fault" than all the little objects that filled up the generation to the point where there's no room for the large object.  If your allocation pattern were different, such that the large object were allocated first and the little objects were allocated later, then you might never see the large object as "causing" a collection.  (This temporal argument is dubious, since allocation is a continuous process: you would be hard-pressed to order your allocations such that your large objects were allocated just after a collection.  But I hope you get the idea.)
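To make the amortization concrete (the numbers are invented): with a 64MB eden and a 10ms young collection, every megabyte that filled eden is charged the same share of the pause, whether it was allocated first or last:

    public class AmortizedCost {
        public static void main(String[] args) {
            double edenMb  = 64.0;  // invented eden size
            double pauseMs = 10.0;  // invented cost of one young collection
            // ~0.16 ms per MB allocated, regardless of which allocation
            // happened to trigger the collection.
            System.out.println("ms per MB allocated = " + (pauseMs / edenMb));
        }
    }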

			... peter

Alex Aisinzon wrote:
> Hi all
> 
>  
> 
> We have historically seen performance degradation when large Java
> objects were allocated.
> 
> We came to this conclusion after reviewing some logs provided with 
> another JVM that trace the size of the object that triggered the collection.
> 
> Is there a flag that can be set to allow tracing the size of the object 
> whose allocation triggered the garbage collection?
> 
> Our current target would be Sun JDK 1.5.
> 
>  
> 
> Thanks in advance
> 
>  
> 
> Alex Aisinzon
> 
