G1 discovers same garbage again?
tony.printezis at oracle.com
Mon Dec 5 17:21:33 UTC 2011
On 11/30/2011 03:35 AM, Florian Binder wrote:
>>> The application continuously calculates ratios with 10 threads (high
>>> cpu load). This is done without producing any garbage. About twice a
>>> minute a request is sent which produces a little bit of garbage.
>>> Since we are working with real-time data we are interested in very short
>>> stop-the-world pauses. Therefore we used the CMS gc in the past,
>>> until we ran into fragmentation problems.
>> Since you don't produce much garbage how come you have fragmentation?
>> Do you keep the results for all the requests you serve?
> This data is held for one day and every night it is dropped and
> reinitialized. We have a lot of different servers with big memory and
> have had fragmentation problems on a few of them.
I assume you know when the data will be dropped and reinitialized,
right? Can you do a Full GC (with System.gc()) after you re-initialize
the data? This typically helps a lot with CMS.
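A minimal sketch of that pattern (the class and data layout here are hypothetical; it also assumes -XX:+DisableExplicitGC is not set, since that flag turns System.gc() into a no-op):

```java
public class DailyReset {
    static long[] data;

    static void reinitialize() {
        // Drop yesterday's data and rebuild today's working set.
        data = new long[1024];
        // With CMS, an explicit Full GC right after the nightly reset
        // compacts the old generation while most of it is garbage,
        // which removes the fragmentation built up during the day.
        System.gc();
    }

    public static void main(String[] args) {
        reinitialize();
        System.out.println("reinitialized, length=" + data.length);
    }
}
```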
> This is the reason I am experimenting
> with g1 in general. I am not sure if we had fragmentation on this one.
G1 should not have fragmentation issues at the small / medium object
size level. It might only if you allocate a lot of very large arrays.
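For illustration, a sketch of the kind of allocation meant here (the sizes are arbitrary; the relevant detail is that G1 allocates any array at or above half the region size, set by -XX:G1HeapRegionSize, as a "humongous" object in contiguous regions outside the Eden, and those can still fragment at the region level):

```java
public class LargeArrays {
    public static void main(String[] args) {
        // An ~8 MB long array exceeds half the region size for G1
        // region sizes up to 16 MB, so G1 would place it in contiguous
        // "humongous" regions rather than in the Eden.
        long[] big = new long[1_000_000];
        big[big.length - 1] = 42L;
        System.out.println(big.length);
    }
}
```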
> Today I tried g1 with another server which surely has had a problem
> with a fragmented heap, but this one did not start with g1. I got several
> different exceptions (NoClassDefFoundError, NullPointerException or even a
> jvm crash ;-)). But I think I will write you another email especially
> for this, because it is started with a lot of special parameters (e.g.
> -Xms39G -Xmx39G -XX:+UseCompressedOops -XX:ObjectAlignmentInBytes=16
Does it work without compressed oops? I wonder whether we don't handle
the 16-byte alignment correctly somewhere.
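A sketch of such a test run (the heap flags are taken from the parameters quoted above; the main class name is a placeholder). If the crashes disappear with compressed oops off, the compressed-oops / 16-byte-alignment combination is the likely culprit:

```
java -Xms39G -Xmx39G -XX:-UseCompressedOops \
     -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC Server
```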
>>> Therefore I am trying g1.
>>> This seemed to work very well at first. The stw-pauses were short,
>>> except the cleanup pauses,
>> Out of curiosity: how long are the cleanup pauses?
> I think they were about 150ms. This is acceptable for me, but in
> proportion to the garbage collections of 30ms it is very long, and
> therefore I was wondering.
Well, it's not acceptable to me. ;-)
>>> The second cause for my email is the crazy behaviour after a few hours:
>>> After the startup of the server it uses about 13.5 gb old-gen memory and
>>> generates eden garbage very slowly. Since the newly allocated memory is
>>> mostly garbage, the (young) garbage collections are very fast and g1
>>> decides to grow the eden space. This works 4 times, until the eden space
>>> has more than about 3.5 gb memory. After this the gc performs many more
>>> collections, and during the collections it discovers new garbage
>>> (probably the old one again).
>> I'm not quite sure what you mean by "it discovers new garbage". For
>> young GCs, G1 (and our other GCs) will reclaim any young objects that
>> it discovers to be dead (more accurately: that it does not discover
>> to be live).
>>> Eden memory usage jumps between 0 and 3.5gb even
>>> though I am sure the java application is not allocating more than before.
>> Well, that's not good. :-) Can you try to explicitly set the young gen
>> size with -Xmn3g say, to see what happens?
> With "it discovers new garbage" I mean that during the garbage
> collection the eden space usage jumps up to 3gb. Then it cleans up all
> the garbage (eden usage is 0) and a few seconds later the eden usage
> jumps up again. You can see this in the 1h eden-space snapshot:
> Since the jumps are between 0 and the last maximum eden usage (of about
> 3.5gb) I assume that it discovers the same garbage it cleaned up the
> last time, and collects it again. I am sure the application is not
> making more garbage than before. Have you ever heard of
> problems like this?
Here's a quick description of how the Eden and the other spaces in G1
work. Hopefully, this will help you understand this behavior a bit better.
When new objects are allocated by the application they are placed in the
Eden, which is a set of regions. When the Eden is full we initiate a
collection. During that collection we physically move all the surviving
objects from the Eden regions to another set of regions called the
Survivors. At the end of a collection the Eden regions have no more live
objects (we copied them somewhere else) so we can reclaim them (which is
why the Eden is empty after a GC). After a GC new objects are allocated
into Eden regions (either new ones, or ones which were reclaimed before;
it doesn't matter which), which makes the Eden fill up again. And when
it's full it's collected again, etc.
Does this explain the behavior you see a bit better?
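The cycle above can be sketched with a toy allocator (a sketch; the allocation sizes and counts are arbitrary, and the volatile sink only exists to keep the JIT from eliminating the dead allocations):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class EdenSawtooth {
    static volatile byte[] sink;  // defeats escape analysis

    public static void main(String[] args) {
        // Allocate short-lived garbage at a steady rate. The Eden
        // repeatedly fills up and is emptied by young collections, so
        // its usage saw-tooths between 0 and its current size even
        // though the application's allocation rate never changes.
        for (int i = 0; i < 5_000_000; i++) {
            sink = new byte[128];  // dead by the next iteration
        }
        long collections = 0;
        for (GarbageCollectorMXBean gc
                 : ManagementFactory.getGarbageCollectorMXBeans()) {
            collections += Math.max(0, gc.getCollectionCount());
        }
        // ~700 MB was allocated in total, far more than any default
        // young gen, so at least one collection must have run.
        System.out.println(collections > 0);
    }
}
```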
> After I wrote the last email, I saw that it calmed down
> after a few hours. But it is nevertheless very curious and produces a
> lot of unnecessary pauses.
They are not really unnecessary. Each pause reclaims a lot of garbage.
>>> I assume that during a collection it runs into the old garbage and
>>> collects it again. Is this possible? Or is there an overflow, since
>>> the eden space uses more than 3.5 gb?
>>> Thanks and regards,
>>> Some useful information:
>>> $ java -version
>>> java version "1.6.0_29"
>>> Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
>>> Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)
>>> Startup Parameters:
>>> -Xms20g -Xmx20g
>>> -verbose:gc \
>>> -XX:+UnlockExperimentalVMOptions \
>>> -XX:+UseG1GC \
>>> -XX:+PrintGCDetails \
>>> -XX:+PrintGCDateStamps \
>>> -XX:+UseLargePages \
>>> -XX:+PrintFlagsFinal \
>>> -XX:-TraceClassUnloading \
>>> $ cat /proc/meminfo | grep Huge
>>> HugePages_Total: 11264
>>> HugePages_Free: 1015
>>> HugePages_Rsvd: 32
>>> Hugepagesize: 2048 kB
>>> A few screen-shots of the jconsole memory-view:
>>> The sysout and syserr logfile with the gc logging and PrintFlagsFinal:
>>> hotspot-gc-use mailing list
>>> hotspot-gc-use at openjdk.java.net