G1 discovers same garbage again?

Florian Binder java at java4.info
Mon Dec 5 11:19:00 PST 2011


Hi Tony,

inline.

Am 05.12.2011 18:21, schrieb Tony Printezis:
> Florian,
>
> inline.
>
> On 11/30/2011 03:35 AM, Florian Binder wrote:
>>>> The application continuously calculates some ratios with 10 threads
>>>> (high cpu load). This is done without producing any garbage. About
>>>> twice a minute a request is sent which produces a little bit of
>>>> garbage. Since we are working with realtime data we are interested
>>>> in very short stop-the-world pauses. Therefore we have used the CMS
>>>> gc in the past, until we have now run into problems with fragmentation.
>>> Since you don't produce much garbage how come you have fragmentation?
>>> Do you keep the results for all the requests you serve?
>> This data is held for one day; every night it is dropped and
>> reinitialized.
>> We have a lot of different servers with big memory and have had
>> fragmentation problems on a few of them.
>
> I assume you know when the data will be dropped and reinitialized, 
> right? Can you do a Full GC (with System.gc()) after you re-initialize 
> the heap? This typically helps a lot with CMS.

Yes, this is exactly what we are trying at the moment ;-)
Even though these Full-GC pauses are very long (60-300 seconds), they are 
much shorter than after a CMS failure (when they sometimes take more than 
4000 seconds ;-)) and we can define when they occur. Maybe this will be 
our solution. It depends on the results of my g1 experiments ;-)
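
For reference, this is roughly what I mean by triggering the Full GC
after the nightly reinitialization (just a minimal sketch; nightlyReset
and dropAndReinitialize() are placeholder names, not our real code):

    // Sketch: run after the nightly drop/re-init of the day's data.
    void nightlyReset() {
        dropAndReinitialize();   // throw away yesterday's data (placeholder)
        // Ask for a full, compacting collection while we control the
        // timing. With CMS, System.gc() is a stop-the-world full GC
        // unless -XX:+ExplicitGCInvokesConcurrent is set (and a
        // concurrent cycle would not compact the heap).
        System.gc();
    }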

>
>>   This is the reason I am experimenting
>> with g1 in general. I am not sure if we had fragmentation on this one.
>
> G1 should not have fragmentation issues at the small / medium size 
> object level. It might only have them if you allocate a lot of very 
> large arrays (0.5MB+).

Ah ok, this is good to know. So if we have a lot of 10 MB byte arrays, 
g1 might have problems with them? Might it help to increase the 
G1HeapRegionSize to 20 MB? Or is this too large, and would it be better 
to break them down into smaller arrays (0.5 MB)?
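
In case it helps to see what I mean by breaking them down, here is a
minimal sketch (ChunkedBytes is just an illustrative name, and the
512 KB chunk size is only an assumption based on the 0.5 MB figure
above):

    // Store a logical 10 MB byte array as 512 KB chunks, so that no
    // single allocation approaches the G1 region size.
    final class ChunkedBytes {
        private static final int CHUNK = 512 * 1024;
        private final byte[][] chunks;

        ChunkedBytes(int length) {
            chunks = new byte[(length + CHUNK - 1) / CHUNK][];
            for (int i = 0, left = length; i < chunks.length; i++, left -= CHUNK) {
                chunks[i] = new byte[Math.min(CHUNK, left)];
            }
        }

        byte get(int index)             { return chunks[index / CHUNK][index % CHUNK]; }
        void set(int index, byte value) { chunks[index / CHUNK][index % CHUNK] = value; }
    }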

>
>> Today I tried g1 on another server which definitely has had a problem
>> with a fragmented heap, but this one did not start with g1. I got several
>> different exceptions (NoClassDefFoundError, NullPointerException or even a
>> jvm crash ;-)). But I think I will write you another email specifically
>> about this, because it is started with a lot of special parameters (e.g.
>> -Xms39G -Xmx39G -XX:+UseCompressedOops -XX:ObjectAlignmentInBytes=16
>> -XX:+UseLargePages).
>
> Does it work without compressed oops? I wonder whether we don't deal 
> with the 16-byte alignment correctly somewhere.

I will have more information on this in the next few days. I will tell you 
when I know something new. Maybe it is the combination with UseLargePages.
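
To isolate it, the plan (nothing tested yet, so take these combinations
only as an assumption of what is worth trying) is to start the server
with the alignment-related flags toggled one at a time:

-Xms39G -Xmx39G -XX:-UseCompressedOops -XX:+UseLargePages
-Xms39G -Xmx39G -XX:+UseCompressedOops -XX:ObjectAlignmentInBytes=16 -XX:-UseLargePages
-Xms39G -Xmx39G -XX:+UseCompressedOops -XX:+UseLargePages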

>
>>>> Therefore I am trying the g1.
>>>>
>>>> This seemed to work very well at first. The stw-pauses were, except 
>>>> the
>>>> cleanup pause,
>>> Out of curiosity: how long are the cleanup pauses?
>> I think they were about 150ms. This is acceptable for me, but compared
>> to the regular garbage collections of 30ms it is very long, and
>> therefore I was wondering.
>
> Well, it's not acceptable to me. ;-)

That is what I wanted to hear ^^

>
>>>> The second reason for my email is the strange behaviour after a few
>>>> hours: after the startup of the server it uses about 13.5 gb of
>>>> old-gen memory and generates eden garbage very slowly. Since the newly
>>>> allocated memory is mostly garbage, the (young) garbage collections
>>>> are very fast and g1 decides to grow the eden space. This happens 4
>>>> times, until the eden space has more than about 3.5 gb. After this
>>>> the gc performs many more collections, and during these collections
>>>> it discovers new garbage (probably the old one again).
>>> I'm not quite sure what you mean by "it discovers new garbage". For
>>> young GCs, G1 (and our other GCs) will reclaim any young objects that
>>> it discovers to be dead (more accurately: that it does not discover
>>> to be live).
>>>
>>>> Eden memory usage jumps between 0 and 3.5gb even
>>>> though I am sure the java application is not allocating more than before.
>>> Well, that's not good. :-) Can you try to explicitly set the young gen
>>> size with -Xmn3g say, to see what happens?
>> With "it discovers new garbage" I mean that during the garbage
>> collection the eden space usage jumps up to 3gb. Then it cleans up all
>> the garbage (eden usage is 0) and a few seconds later the eden usage
>> jumps up again. You can see this in the 1h eden-space snapshot:
>> http://java4.info/g1/eden_1h.png
>> Since the jumps are between 0 and the last maximum eden usage (of about
>> 3.5gb), I assume that it discovers the same garbage it cleaned up the
>> last time and collects it again. I am sure the application is not
>> producing more garbage than before. Have you ever heard of
>> problems like this?
>
> Here's a quick description of how the Eden and the other spaces in G1 
> work. Hopefully, this will help you understand this behavior a bit 
> better.
>
> When new objects are allocated by the application they are placed in 
> the Eden, which is a set of regions. When the Eden is full we initiate 
> a collection. During that collection we physically move all the 
> surviving objects from the Eden regions to another set of regions 
> called the Survivors. At the end of a collection the Eden regions have 
> no more live objects (we copied them somewhere else) so we can reclaim 
> them (which is why the Eden is empty after a GC). After a GC new 
> objects are allocated into Eden regions (either new ones, or ones 
> which were reclaimed before; it doesn't matter which) which make the 
> Eden grow again. And when it's full it's again collected, etc.
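
Just so I am sure I am reading the graphs right: a tiny test like the one
below (only an illustration I put together, not our application) shows
exactly that cycle with -verbose:gc, i.e. eden fills with short-lived
objects, a young GC empties it, and it fills up again:

    // Illustration: steadily allocate short-lived objects so that eden
    // repeatedly fills up and is emptied by young GCs.
    // Run e.g. with: java -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
    //                     -verbose:gc EdenChurn
    public class EdenChurn {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                byte[] garbage = new byte[64 * 1024]; // dies immediately
                Thread.sleep(1);                      // low allocation rate
            }
        }
    }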

So, after a (young) gc the eden space should grow only by the newly 
allocated objects? Or is it possible that new non-empty regions are used 
for the eden space, too?
As you can see at http://java4.info/g1/eden_all.png
from 13:20 until 18:00 the eden space is constantly growing (with a few 
gcs), just by the new objects (allocated by me). But after 18:00 there 
are frequent jumps which are much larger than I would ever allocate. So 
what is causing them?

At first I thought g1 was adding some old-gen regions to the gc because 
it has enough time to do so, but then I saw that in that case 
"(partial)" is appended in the out-log:
http://java4.info/g1/out_err.log.gz
Furthermore, this should not increase the total heap space:
http://java4.info/g1/all.png

Or is it possible that within a young gc only a few of the young regions 
are collected and reclaimed?

Thanks,
Flo

>
> Does this explain better the behavior you see?
>
>> After I wrote the last email, I saw that it calmed down
>> after a few hours. But it is nevertheless very curious and produces a
>> lot of unnecessary pauses.
>
> They are not really unnecessary. Each pause reclaims a lot of 
> short-lived objects.
>
> Tony
>
>> Flo
>>
>>> Tony
>>>
>>>>    I
>>>> assume that during a collection it runs into the old garbage and
>>>> collects it again. Is this possible? Or is there an overflow since
>>>> the eden space uses more than 3.5 gb?
>>>>
>>>> Thanks and regards,
>>>> Flo
>>>>
>>>> Some useful information:
>>>> $ java -version
>>>> java version "1.6.0_29"
>>>> Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
>>>> Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)
>>>>
>>>> Startup Parameters:
>>>> -Xms20g -Xmx20g
>>>> -verbose:gc \
>>>> -XX:+UnlockExperimentalVMOptions \
>>>> -XX:+UseG1GC \
>>>> -XX:+PrintGCDetails \
>>>> -XX:+PrintGCDateStamps \
>>>> -XX:+UseLargePages \
>>>> -XX:+PrintFlagsFinal \
>>>> -XX:-TraceClassUnloading \
>>>>
>>>> $ cat /proc/meminfo | grep Huge
>>>> HugePages_Total: 11264
>>>> HugePages_Free:   1015
>>>> HugePages_Rsvd:     32
>>>> Hugepagesize:     2048 kB
>>>>
>>>> A few screen-shots of the jconsole memory-view:
>>>> http://java4.info/g1/1h.png
>>>> http://java4.info/g1/all.png
>>>> http://java4.info/g1/eden_1h.png
>>>> http://java4.info/g1/eden_all.png
>>>> http://java4.info/g1/oldgen_all.png
>>>>
>>>> The sysout and syserr logfile with the gc logging and PrintFlagsFinal:
>>>> http://java4.info/g1/out_err.log.gz


