G1 discovers same garbage again?

Florian Binder java at java4.info
Mon Dec 12 21:55:14 UTC 2011


Hi Ramki,

thanks for your answer.
I have compared the GC log with the jconsole chart, and for every drop
of the eden space to zero there is a (young) GC entry in the log. So the
jconsole chart is right and the inter-scavenge period really does fall
suddenly (even though I am almost sure the application does not allocate
more objects than before).
Thank you for the hint about the TLAB size. I will try this now.
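
If I understand the HotSpot options correctly, turning the resizing off
and pinning the size should look something like the following (the 256k
value is only a first guess for our allocation pattern, not a
recommendation):

    -XX:-ResizeTLAB -XX:TLABSize=256k -XX:+PrintTLAB

-XX:+PrintTLAB should print per-thread TLAB statistics at each scavenge,
so the effect ought to be visible in the log.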

Flo

On 12.12.2011 20:40, Srinivas Ramakrishna wrote:
>
> As I said, instead of relying on jconsole's asynchronous heap size
> samples, look at the GC log to see if the inter-scavenge period
> suddenly falls (i.e. the application is presumably allocating at a
> much faster rate than is normal). If the GC log confirms that, then
> see if turning off TLAB resizing (and fixing it to a reasonable value)
> makes any difference.
>
> Your existing GC logs should be sufficient to answer the first part of 
> the question above.
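>
> Something along these lines would print the gap between consecutive
> timestamped GC entries (a naive, untested sketch; it assumes the
> 2011-11-28T19:13:19.482+0100 timestamp prefix from your log excerpt):
>
>     import java.io.BufferedReader;
>     import java.io.IOException;
>     import java.io.InputStreamReader;
>     import java.text.ParseException;
>     import java.text.SimpleDateFormat;
>     import java.util.Date;
>
>     // Pipe the decompressed GC log in on stdin; prints the gap in
>     // seconds since the previous timestamped GC entry.
>     public class ScavengeGaps {
>         public static void main(String[] args)
>                 throws IOException, ParseException {
>             SimpleDateFormat fmt =
>                     new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
>             BufferedReader in =
>                     new BufferedReader(new InputStreamReader(System.in));
>             Date prev = null;
>             String line;
>             while ((line = in.readLine()) != null) {
>                 int i = line.indexOf(": [GC");   // GC entries only
>                 if (i < 0) continue;
>                 Date t = fmt.parse(line.substring(0, i));
>                 if (prev != null)
>                     System.out.printf("%8.1f s  %s%n",
>                             (t.getTime() - prev.getTime()) / 1000.0, line);
>                 prev = t;
>             }
>         }
>     }
>
> For example: zcat out_err.log.gz | java ScavengeGaps. A sudden drop in
> the printed gaps would confirm the raised allocation rate.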
>
> best.
> -- ramki
>
> On Mon, Dec 12, 2011 at 11:08 AM, Florian Binder <java at java4.info> wrote:
>
>     Tony,
>
>     As you have seen in the jconsole screenshot, the app produces very
>     little garbage; it is optimized to produce none. But suddenly
>     there seems to be about 4 GB of (new) garbage every 90 seconds
>     (even though the load of the app does not change). Therefore I
>     thought it might be the G1 collector. I will try to reproduce this
>     scenario and have a closer look at it.
>
>     Flo
>
>     On 12.12.2011 15:46, Tony Printezis wrote:
>>     Florian,
>>
>>     Not sure what you mean by "the same GC". Is it because the space
>>     transition looks like 16394M->12914M(20480M)? This is normal.
>>     Heap occupancy reaches 16394M, a GC happens, and occupancy goes
>>     down to 12914M. By the next GC, the heap contains new objects.
>>     This is how GCs work.
>>
>>     Tony
>>
>>     On 12/09/2011 09:08 PM, Florian Binder wrote:
>>>     In the GC log it looks as if the same garbage collection is
>>>     running again and again.
>>>     For example:
>>>     $ zcat out_err.log.gz | grep 20480M | tail
>>>     2011-11-28T19:13:19.482+0100: [GC cleanup
>>>     14515M->14515M(20480M), 0.1370060 secs]
>>>        [ 16394M->12914M(20480M)]
>>>        [ 16394M->12914M(20480M)]
>>>     2011-11-28T19:17:12.509+0100: [GC cleanup
>>>     15582M->15582M(20480M), 0.1387230 secs]
>>>        [ 16394M->12914M(20480M)]
>>>        [ 16394M->12914M(20480M)]
>>>        [ 16394M->12914M(20480M)]
>>>     2011-11-28T19:21:06.089+0100: [GC cleanup
>>>     12978M->12978M(20480M), 0.1344170 secs]
>>>        [ 16394M->12914M(20480M)]
>>>        [ 16394M->12914M(20480M)]
>>>
>>>     Therefore I assume this might be a bug ;-)
>>>     You can download the whole log at:
>>>     http://java4.info/g1/out_err.log.gz
>>>
>>>     I don't think that we still have the logs of that very long GC,
>>>     but I will have a look for them on Monday.
>>>     Furthermore, I do not think that we logged much GC detail there.
>>>     But I know that this happened on a very special server, which
>>>     contains more than 30 GB of references (yes, just references to
>>>     other objects). If we run it with CompressedOops, we reduce the
>>>     memory usage to nearly 50%.
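>>>
>>>     For reference, the flag in question (on a 64-bit VM; usable as
>>>     long as the heap stays below roughly 32 GB):
>>>
>>>         -XX:+UseCompressedOops
>>>
>>>     It stores each reference as a 32-bit compressed offset instead
>>>     of a 64-bit pointer, which fits the roughly 50% saving on a heap
>>>     consisting almost entirely of references.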
>>>
>>>     Flo
>>>
>>>
>>>
>>>     On 10.12.2011 01:34, Srinivas Ramakrishna wrote:
>>>>
>>>>
>>>>     A couple of things caught my eye....
>>>>
>>>>         On 12/05/2011 02:19 PM, Florian Binder wrote:
>>>>         >> I assume you know when the data will be dropped and
>>>>         >> reinitialized, right? Can you do a Full GC (with
>>>>         >> System.gc()) after you re-initialize the heap? This
>>>>         >> typically helps a lot with CMS.
>>>>         > Yes, this is exactly what we are trying at this time ;-)
>>>>         > Even though these Full-GC pauses are very long (60-300
>>>>         > seconds), they are much shorter than after a CMS failure
>>>>         > (when they sometimes take more than 4000 seconds ;-)),
>>>>         > and we can define when they occur.
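>>>>
>>>>         (A minimal sketch of that pattern, with hypothetical names
>>>>         for the application's own steps; with CMS,
>>>>         -XX:+ExplicitGCInvokesConcurrent would additionally turn
>>>>         the explicit collection into a concurrent cycle rather
>>>>         than a stop-the-world full GC:)
>>>>
>>>>             dropData();          // hypothetical: release the old structures
>>>>             reinitializeData();  // hypothetical: rebuild the heap contents
>>>>             System.gc();         // explicit full GC at a known-quiet moment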
>>>>
>>>>
>>>>     A full GC that takes an hour is definitely a bug. Have you
>>>>     filed that bug? Or could you at least share the GC log? What
>>>>     version of the JDK was this behaviour seen with?
>>>>
>>>>     ...
>>>>
>>>>         > Furthermore, this should not increase the total heap space:
>>>>         > http://java4.info/g1/all.png
>>>>
>>>>
>>>>     Remember that jconsole samples the heap asynchronously, so the
>>>>     sizes it reads can have glitches. You should instead rely on
>>>>     the GC log to assess the heap size after each GC event, rather
>>>>     than on the asynchronous samples from jconsole. I myself have
>>>>     not had the chance to look at your GC logs to see what they
>>>>     indicate about the size of Eden and of the heap.
>>>>
>>>>     -- ramki
>>>
>>>
>>>
>
>
