From tony.printezis at oracle.com Mon Dec 5 09:21:33 2011
From: tony.printezis at oracle.com (Tony Printezis)
Date: Mon, 05 Dec 2011 12:21:33 -0500
Subject: G1 discovers same garbage again?
In-Reply-To: <4ED5EAE1.2020102@java4.info>
References: <4ED3DE79.5050801@java4.info> <4ED51672.6030105@oracle.com> <4ED5EAE1.2020102@java4.info>
Message-ID: <4EDCFD9D.5040005@oracle.com>

Florian,

inline.

On 11/30/2011 03:35 AM, Florian Binder wrote:
>>> The application calculates ratios the whole time with 10 threads (high cpu load). This is done without producing any garbage. About two times a minute a request is sent which produces a little bit of garbage. Since we are working with realtime data we are interested in very short stop-the-world pauses. Therefore we have used the CMS gc in the past, until we got problems with fragmentation now.
>> Since you don't produce much garbage, how come you have fragmentation? Do you keep the results for all the requests you serve?
> This data is held for one day and every night it is dropped and reinitialized. We have a lot of different servers with big memory and have had problems with fragmentation on a few of them.

I assume you know when the data will be dropped and reinitialized, right? Can you do a Full GC (with System.gc()) after you re-initialize the heap? This typically helps a lot with CMS.

> This was the reason I am experimenting with g1 in general. I am not sure if we had fragmentation on this one.

G1 should not have fragmentation issues at the small / medium object size level. It might only if you allocate a lot of very large arrays (0.5MB+).

> Today I tried the g1 with another server which surely has had a problem with a fragmented heap, but this one did not start with g1. I got several different exceptions (NoClassDefFound, NullPointerException or even a jvm-crash ;-)). But I think I will write you another email especially for this, because it is started with a lot of special parameters (e.g.
> -Xms39G -Xmx39G -XX:+UseCompressedOops -XX:ObjectAlignmentInBytes=16 -XX:+UseLargePages).

Does it work without compressed oops? I wonder whether we don't deal with the 16-byte alignment correctly somewhere.

>>> Therefore I am trying the g1.
>>>
>>> This seemed to work very well at first. The stw-pauses were, except the cleanup pause,
>> Out of curiosity: how long are the cleanup pauses?
> I think they were about 150ms. This is acceptable for me, but in proportion to the garbage collection of 30ms it is very long and therefore I was wondering.

Well, it's not acceptable to me. ;-)

>>> The second cause for my email is the crazy behaviour after a few hours: After the startup of the server it uses about 13.5 gb old-gen memory and generates eden-garbage very slowly. Since the newly allocated memory is mostly garbage the (young) garbage collections are very fast and g1 decides to grow the eden space. This works 4 times until eden space has more than about 3.5 gb memory. After this the gc is making many more collections, and during the collections it discovers new garbage (probably the old one again).
>> I'm not quite sure what you mean by "it discovers new garbage". For young GCs, G1 (and our other GCs) will reclaim any young objects that it discovers to be dead (more accurately: that it does not discover to be live).
>>
>>> Eden memory usage jumps between 0 and 3.5gb even though I am sure the java-application is not making more than before.
>> Well, that's not good. :-) Can you try to explicitly set the young gen size with -Xmn3g, say, to see what happens?
> With "it discovers new garbage" I mean that during the garbage collection the eden space usage jumps up to 3gb. Then it cleans up the whole garbage (eden usage is 0) and a few seconds later the eden usage jumps up again.
You can see this in the 1h eden-space snapshot:
> http://java4.info/g1/eden_1h.png
> Since the jumps are between 0 and the last max eden usage (of about 3.5gb) I assume that it discovers the same garbage it cleaned up the last time, and collects it again. I am sure the application is not making more garbage than the time before. Have you ever heard of problems like this?

Here's a quick description of how the Eden and the other spaces in G1 work. Hopefully, this will help you understand this behavior a bit better.

When new objects are allocated by the application they are placed in the Eden, which is a set of regions. When the Eden is full we initiate a collection. During that collection we physically move all the surviving objects from the Eden regions to another set of regions called the Survivors. At the end of a collection the Eden regions have no more live objects (we copied them somewhere else) so we can reclaim them (which is why the Eden is empty after a GC). After a GC new objects are allocated into Eden regions (either new ones, or ones which were reclaimed before; it doesn't matter which), which makes the Eden grow again. And when it's full it's collected again, etc.

Does this explain the behavior you see better?

> After I have written the last email, I have seen that it has calmed down after a few hours. But it is nevertheless very curious and produces a lot of unnecessary pauses.

They are not really unnecessary. Each pause reclaims a lot of short-lived objects.

Tony

> Flo
>
>> Tony
>>
>>> I assume that it runs during a collection into the old garbage and collects it again. Is this possible? Or is there an overflow since eden space uses more than 3.5 gb?
>>>
>>> Thanks and regards,
>>> Flo
>>>
>>> Some useful information:
>>> $ java -version
>>> java version "1.6.0_29"
>>> Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
>>> Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)
>>>
>>> Startup Parameters:
>>> -Xms20g -Xmx20g
>>> -verbose:gc \
>>> -XX:+UnlockExperimentalVMOptions \
>>> -XX:+UseG1GC \
>>> -XX:+PrintGCDetails \
>>> -XX:+PrintGCDateStamps \
>>> -XX:+UseLargePages \
>>> -XX:+PrintFlagsFinal \
>>> -XX:-TraceClassUnloading \
>>>
>>> $ cat /proc/meminfo | grep Huge
>>> HugePages_Total: 11264
>>> HugePages_Free: 1015
>>> HugePages_Rsvd: 32
>>> Hugepagesize: 2048 kB
>>>
>>> A few screen-shots of the jconsole memory-view:
>>> http://java4.info/g1/1h.png
>>> http://java4.info/g1/all.png
>>> http://java4.info/g1/eden_1h.png
>>> http://java4.info/g1/eden_all.png
>>> http://java4.info/g1/oldgen_all.png
>>>
>>> The sysout and syserr logfile with the gc logging and PrintFlagsFinal:
>>> http://java4.info/g1/out_err.log.gz
>>> _______________________________________________
>>> hotspot-gc-use mailing list
>>> hotspot-gc-use at openjdk.java.net
>>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

From java at java4.info Mon Dec 5 11:19:00 2011
From: java at java4.info (Florian Binder)
Date: Mon, 05 Dec 2011 20:19:00 +0100
Subject: G1 discovers same garbage again?
In-Reply-To: <4EDCFD9D.5040005@oracle.com>
References: <4ED3DE79.5050801@java4.info> <4ED51672.6030105@oracle.com> <4ED5EAE1.2020102@java4.info> <4EDCFD9D.5040005@oracle.com>
Message-ID: <4EDD1924.2030803@java4.info>

Hi Tony,

inline.

On 05.12.2011 18:21, Tony Printezis wrote:
> Florian,
>
> inline.
>
> On 11/30/2011 03:35 AM, Florian Binder wrote:
>>>> The application calculates ratios the whole time with 10 threads (high cpu load).
This is done without producing any garbage. About two times a minute a request is sent which produces a little bit of garbage. Since we are working with realtime data we are interested in very short stop-the-world pauses. Therefore we have used the CMS gc in the past, until we got problems with fragmentation now.
>>> Since you don't produce much garbage, how come you have fragmentation? Do you keep the results for all the requests you serve?
>> This data is held for one day and every night it is dropped and reinitialized. We have a lot of different servers with big memory and have had problems with fragmentation on a few of them.
>
> I assume you know when the data will be dropped and reinitialized, right? Can you do a Full GC (with System.gc()) after you re-initialize the heap? This typically helps a lot with CMS.

Yes, this is exactly what we are trying at this time ;-) Even though these Full-GC pauses are very long (60-300 seconds), they are much shorter than after a CMS-failure (when they sometimes take more than 4000 seconds ;-)) and we can define when they occur. Maybe this will be our solution. It depends on the result of my g1-experience ;-)

>> This was the reason I am experimenting with g1 in general. I am not sure if we had fragmentation on this one.
>
> G1 should not have fragmentation issues at the small / medium object size level. It might only if you allocate a lot of very large arrays (0.5MB+).

Ah ok, this is good to know. So if we have a lot of 10 MB byte arrays, g1 might have problems with them? Might it help to increase the G1HeapRegionSize to 20MB? Or is this too large, and would it be better to break them down into smaller arrays (0.5mb)?

>> Today I tried the g1 with another server which surely has had a problem with a fragmented heap, but this one did not start with g1. I got several different exceptions (NoClassDefFound, NullPointerException or even a jvm-crash ;-)).
But I think I will write you another email especially for this, because it is started with a lot of special parameters (e.g. -Xms39G -Xmx39G -XX:+UseCompressedOops -XX:ObjectAlignmentInBytes=16 -XX:+UseLargePages).
>
> Does it work without compressed oops? I wonder whether we don't deal with the 16-byte alignment correctly somewhere.

I will have more experience on this in the next few days. I will tell you if I find out something new. Maybe it is the combination with UseLargePages.

>>>> Therefore I am trying the g1.
>>>>
>>>> This seemed to work very well at first. The stw-pauses were, except the cleanup pause,
>>> Out of curiosity: how long are the cleanup pauses?
>> I think they were about 150ms. This is acceptable for me, but in proportion to the garbage collection of 30ms it is very long and therefore I was wondering.
>
> Well, it's not acceptable to me. ;-)

That is what I wanted to hear ^^

>>>> The second cause for my email is the crazy behaviour after a few hours: After the startup of the server it uses about 13.5 gb old-gen memory and generates eden-garbage very slowly. Since the newly allocated memory is mostly garbage the (young) garbage collections are very fast and g1 decides to grow the eden space. This works 4 times until eden space has more than about 3.5 gb memory. After this the gc is making many more collections, and during the collections it discovers new garbage (probably the old one again).
>>> I'm not quite sure what you mean by "it discovers new garbage". For young GCs, G1 (and our other GCs) will reclaim any young objects that it discovers to be dead (more accurately: that it does not discover to be live).
>>>
>>>> Eden memory usage jumps between 0 and 3.5gb even though I am sure the java-application is not making more than before.
>>> Well, that's not good.
:-) Can you try to explicitly set the young gen size with -Xmn3g, say, to see what happens?
>> With "it discovers new garbage" I mean that during the garbage collection the eden space usage jumps up to 3gb. Then it cleans up the whole garbage (eden usage is 0) and a few seconds later the eden usage jumps up again. You can see this in the 1h eden-space snapshot:
>> http://java4.info/g1/eden_1h.png
>> Since the jumps are between 0 and the last max eden usage (of about 3.5gb) I assume that it discovers the same garbage it cleaned up the last time, and collects it again. I am sure the application is not making more garbage than the time before. Have you ever heard of problems like this?
>
> Here's a quick description of how the Eden and the other spaces in G1 work. Hopefully, this will help you understand this behavior a bit better.
>
> When new objects are allocated by the application they are placed in the Eden, which is a set of regions. When the Eden is full we initiate a collection. During that collection we physically move all the surviving objects from the Eden regions to another set of regions called the Survivors. At the end of a collection the Eden regions have no more live objects (we copied them somewhere else) so we can reclaim them (which is why the Eden is empty after a GC). After a GC new objects are allocated into Eden regions (either new ones, or ones which were reclaimed before; it doesn't matter which), which makes the Eden grow again. And when it's full it's collected again, etc.

So, after a (young) gc the eden space should increase only by the newly allocated objects? Or is it possible that new non-empty regions are used for the eden space, too? As you can see at http://java4.info/g1/eden_all.png, from 13:20 until 18:00 the eden space is constantly growing (with a few gcs) just by the new objects (allocated by me). But after 18:00 there are frequent jumps which are much more than I would ever allocate.
So what is causing them?

First of all I thought g1 is taking some old-gen regions into the gc because it has enough time to do this, but then I saw that in this case "(partial)" would be appended in the out-log:
http://java4.info/g1/out_err.log.gz
Furthermore this should not increase the total-heap-space:
http://java4.info/g1/all.png

Or is it possible that within a young gc only a few of the young regions are collected and reclaimed?

Thanks,
Flo

>
> Does this explain the behavior you see better?
>
>> After I have written the last email, I have seen that it has calmed down after a few hours. But it is nevertheless very curious and produces a lot of unnecessary pauses.
>
> They are not really unnecessary. Each pause reclaims a lot of short-lived objects.
>
> Tony
>
>> Flo
>>
>>> Tony
>>>
>>>> I assume that it runs during a collection into the old garbage and collects it again. Is this possible? Or is there an overflow since eden space uses more than 3.5 gb?
>>>>
>>>> Thanks and regards,
>>>> Flo
>>>>
>>>> Some useful information:
>>>> $ java -version
>>>> java version "1.6.0_29"
>>>> Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
>>>> Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)
>>>>
>>>> Startup Parameters:
>>>> -Xms20g -Xmx20g
>>>> -verbose:gc \
>>>> -XX:+UnlockExperimentalVMOptions \
>>>> -XX:+UseG1GC \
>>>> -XX:+PrintGCDetails \
>>>> -XX:+PrintGCDateStamps \
>>>> -XX:+UseLargePages \
>>>> -XX:+PrintFlagsFinal \
>>>> -XX:-TraceClassUnloading \
>>>>
>>>> $ cat /proc/meminfo | grep Huge
>>>> HugePages_Total: 11264
>>>> HugePages_Free: 1015
>>>> HugePages_Rsvd: 32
>>>> Hugepagesize: 2048 kB
>>>>
>>>> A few screen-shots of the jconsole memory-view:
>>>> http://java4.info/g1/1h.png
>>>> http://java4.info/g1/all.png
>>>> http://java4.info/g1/eden_1h.png
>>>> http://java4.info/g1/eden_all.png
>>>> http://java4.info/g1/oldgen_all.png
>>>>
>>>> The sysout and syserr logfile with the gc logging and PrintFlagsFinal:
http://java4.info/g1/out_err.log.gz

From java at java4.info Tue Dec 6 12:23:41 2011
From: java at java4.info (Florian Binder)
Date: Tue, 06 Dec 2011 21:23:41 +0100
Subject: VM-crash after starting with -XX:+UseLargePages -XX:+UseG1GC -XX:G1HeapRegionSize=16M
Message-ID: <4EDE79CD.4080507@java4.info>

Hi all,

I was looking for a short test to reproduce the vm-crash when using ObjectAlignment=16, but found another combination of options which results in a vm-crash ;-) I have attached the java-source to this email. Please let me know if I should report this in another way.

Flo

The startup command is:
$ java -Xmx1g -Xms1g -XX:+UseLargePages -XX:+UseG1GC -XX:G1HeapRegionSize=16M Test

#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00002b368c17cfa0, pid=15290, tid=1097242944
#
# JRE version: 6.0_29-b11
# Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64)
# Problematic frame:
# V [libjvm.so+0x3b6fa0] CMTask::drain_local_queue(bool)+0xb0
#
# An error report file with more information is saved as:
# /home/fbr/g1test/hs_err_pid15290.log
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
#
Abgebrochen (Aborted)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: Test.java
Type: text/x-java
Size: 280 bytes
Desc: not available
Url: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111206/b698eacd/Test.java

From tony.printezis at oracle.com Fri Dec 9 11:32:51 2011
From: tony.printezis at oracle.com (Tony Printezis)
Date: Fri, 09 Dec 2011 14:32:51 -0500
Subject: G1 discovers same garbage again?
In-Reply-To: <4EDD1924.2030803@java4.info>
References: <4ED3DE79.5050801@java4.info> <4ED51672.6030105@oracle.com> <4ED5EAE1.2020102@java4.info> <4EDCFD9D.5040005@oracle.com> <4EDD1924.2030803@java4.info>
Message-ID: <4EE26263.3040307@oracle.com>

Florian,

Inline.

On 12/05/2011 02:19 PM, Florian Binder wrote:
>> I assume you know when the data will be dropped and reinitialized, right? Can you do a Full GC (with System.gc()) after you re-initialize the heap? This typically helps a lot with CMS.
> Yes, this is exactly what we are trying at this time ;-) Even though these Full-GC pauses are very long (60-300 seconds), they are much shorter than after a CMS-failure (when they sometimes take more than 4000 seconds ;-)) and we can define when they occur. Maybe this will be our solution. It depends on the result of my g1-experience ;-)

OK. :-) We'd better make it really good. :-)

>>> This was the reason I am experimenting with g1 in general. I am not sure if we had fragmentation on this one.
>> G1 should not have fragmentation issues at the small / medium object size level. It might only if you allocate a lot of very large arrays (0.5MB+).
> Ah ok, this is good to know. So if we have a lot of 10 MB byte arrays, g1 might have problems with them?

Let me be a bit clearer: if you just allocate such large objects once, then it won't be a problem. If you allocate them and drop them at a rapid rate it might be a problem.

> Might it help to increase the G1HeapRegionSize to 20MB? Or is this too large, and would it be better to break them down into smaller arrays (0.5mb)?
First, the heap region size should be a power of 2 (so if you try to set it to 20MB, G1 will actually use 32MB). Also, the heap region size is automatically calculated for a particular heap size. If you enable -XX:+PrintHeapAtGC it should tell you how large it is for your particular app:

garbage-first heap total 6144K, used 3185K [0x6e800000, 0x6ee00000, 0xae800000)
region size 1024K, 2 young (2048K), 1 survivors (1024K)

Generally, avoiding allocating objects that are larger than the heap region size is good practice, as it will eliminate fragmentation issues. However, please also remember that each object has a header too! So if you allocate, say, a byte array with exactly 1MB of entries, the object's size is going to be 1MB + the array header size (12 bytes in the 32-bit JVM). So you want to size arrays a bit smaller than the region size (not too small though, as you'll probably waste the rest of the region).

>>
>> Here's a quick description of how the Eden and the other spaces in G1 work. Hopefully, this will help you understand this behavior a bit better.
>>
>> When new objects are allocated by the application they are placed in the Eden, which is a set of regions. When the Eden is full we initiate a collection. During that collection we physically move all the surviving objects from the Eden regions to another set of regions called the Survivors. At the end of a collection the Eden regions have no more live objects (we copied them somewhere else) so we can reclaim them (which is why the Eden is empty after a GC). After a GC new objects are allocated into Eden regions (either new ones, or ones which were reclaimed before; it doesn't matter which), which makes the Eden grow again. And when it's full it's collected again, etc.

> So, after a (young) gc the eden space should increase only by the newly allocated objects?

Yes.

> Or is it possible that new non-empty regions are used for the eden space, too?
Any free region can be used for the eden, either ones that were reclaimed before or new ones.

> As you can see at http://java4.info/g1/eden_all.png, from 13:20 until 18:00 the eden space is constantly growing (with a few gcs)

Did you point me to the right graph? The timeline seems to be 18:06 - 18:35.

> just by the new objects (allocated by me). But after 18:00 there are frequent jumps which are much more than I would ever allocate. So what is causing them?
>
> First of all I thought g1 is taking some old-gen regions into the gc because it has enough time to do this,

We generally only do that for a few GCs after each marking phase, when we recalculate the liveness information in the old regions and we know which ones are best to collect.

> but then I saw that in this case "(partial)" would be appended in the out-log:
> http://java4.info/g1/out_err.log.gz
> Furthermore this should not increase the total-heap-space:
> http://java4.info/g1/all.png
>
> Or is it possible that within a young gc only a few of the young regions are collected and reclaimed?

Nope. We definitely collect all the young regions at every young GC.

Tony

> Thanks,
> Flo
>
>> Does this explain the behavior you see better?
>>
>>> After I have written the last email, I have seen that it has calmed down after a few hours. But it is nevertheless very curious and produces a lot of unnecessary pauses.
>> They are not really unnecessary. Each pause reclaims a lot of short-lived objects.
>>
>> Tony
>>
>>> Flo
>>>
>>>> Tony
>>>>
>>>>> I assume that it runs during a collection into the old garbage and collects it again. Is this possible? Or is there an overflow since eden space uses more than 3.5 gb?
>>>>>
>>>>> Thanks and regards,
>>>>> Flo
>>>>>
>>>>> Some useful information:
>>>>> $ java -version
>>>>> java version "1.6.0_29"
>>>>> Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
>>>>> Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)
>>>>>
>>>>> Startup Parameters:
>>>>> -Xms20g -Xmx20g
>>>>> -verbose:gc \
>>>>> -XX:+UnlockExperimentalVMOptions \
>>>>> -XX:+UseG1GC \
>>>>> -XX:+PrintGCDetails \
>>>>> -XX:+PrintGCDateStamps \
>>>>> -XX:+UseLargePages \
>>>>> -XX:+PrintFlagsFinal \
>>>>> -XX:-TraceClassUnloading \
>>>>>
>>>>> $ cat /proc/meminfo | grep Huge
>>>>> HugePages_Total: 11264
>>>>> HugePages_Free: 1015
>>>>> HugePages_Rsvd: 32
>>>>> Hugepagesize: 2048 kB
>>>>>
>>>>> A few screen-shots of the jconsole memory-view:
>>>>> http://java4.info/g1/1h.png
>>>>> http://java4.info/g1/all.png
>>>>> http://java4.info/g1/eden_1h.png
>>>>> http://java4.info/g1/eden_all.png
>>>>> http://java4.info/g1/oldgen_all.png
>>>>>
>>>>> The sysout and syserr logfile with the gc logging and PrintFlagsFinal:
>>>>> http://java4.info/g1/out_err.log.gz

From ysr1729 at gmail.com Fri Dec 9 16:34:09 2011
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Fri, 9 Dec 2011 16:34:09 -0800
Subject: G1 discovers same garbage again?
In-Reply-To: <4EE26263.3040307@oracle.com>
References: <4ED3DE79.5050801@java4.info> <4ED51672.6030105@oracle.com> <4ED5EAE1.2020102@java4.info> <4EDCFD9D.5040005@oracle.com> <4EDD1924.2030803@java4.info> <4EE26263.3040307@oracle.com>
Message-ID:

A couple of things caught my eye....

On 12/05/2011 02:19 PM, Florian Binder wrote:
> >> I assume you know when the data will be dropped and reinitialized, right? Can you do a Full GC (with System.gc()) after you re-initialize the heap? This typically helps a lot with CMS.
> > Yes, this is exactly what we are trying at this time ;-)
> > Even though these Full-GC pauses are very long (60-300 seconds), they are much shorter than after a CMS-failure (when they sometimes take more than 4000 seconds ;-)) and we can define when they occur.

A full GC that takes an hour is definitely a bug. Have you logged that bug? Or at least share the GC log? What's the version of the JDK that this behaviour was seen with?

...

> > Furthermore this should not increase the total-heap-space:
> > http://java4.info/g1/all.png

Remember that jconsole asynchronously samples the heap, whose size is read "with possible glitches". Instead, you should probably rely on the GC log in order to assess the heap size after each GC event, rather than the asynchronous samples from jconsole. I myself have not had the chance to look at your GC logs to see what they indicated wrt the size of Eden and of the Heap.

-- ramki

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111209/d968cff8/attachment.html

From java at java4.info Fri Dec 9 18:08:13 2011
From: java at java4.info (Florian Binder)
Date: Sat, 10 Dec 2011 03:08:13 +0100
Subject: G1 discovers same garbage again?
In-Reply-To:
References: <4ED3DE79.5050801@java4.info> <4ED51672.6030105@oracle.com> <4ED5EAE1.2020102@java4.info> <4EDCFD9D.5040005@oracle.com> <4EDD1924.2030803@java4.info> <4EE26263.3040307@oracle.com>
Message-ID: <4EE2BF0D.1050005@java4.info>

In the gc log it seems that the same garbage collection is always running again. For example:

$ zcat out_err.log.gz | grep 20480M | tail
2011-11-28T19:13:19.482+0100: [GC cleanup 14515M->14515M(20480M), 0.1370060 secs]
[ 16394M->12914M(20480M)]
[ 16394M->12914M(20480M)]
2011-11-28T19:17:12.509+0100: [GC cleanup 15582M->15582M(20480M), 0.1387230 secs]
[ 16394M->12914M(20480M)]
[ 16394M->12914M(20480M)]
[ 16394M->12914M(20480M)]
2011-11-28T19:21:06.089+0100: [GC cleanup 12978M->12978M(20480M), 0.1344170 secs]
[ 16394M->12914M(20480M)]
[ 16394M->12914M(20480M)]

Therefore I assume this might be a bug ;-) You can download the whole log at:
http://java4.info/g1/out_err.log.gz

I don't think that we still have the logs of that very long gc, but I will have a look for them on Monday. Furthermore I do not think that we logged much detail of the gc there. But I know that this happened on a very special server, which contains more than 30gb of references (yes, just references to other objects). If we run it with CompressedOops we reduce the memory usage to nearly 50%.

Flo

On 10.12.2011 01:34, Srinivas Ramakrishna wrote:
>
> A couple of things caught my eye....
>
> On 12/05/2011 02:19 PM, Florian Binder wrote:
> >> I assume you know when the data will be dropped and reinitialized, right? Can you do a Full GC (with System.gc()) after you re-initialize the heap? This typically helps a lot with CMS.
> > Yes, this is exactly what we are trying at this time ;-)
> > Even though these Full-GC pauses are very long (60-300 seconds), they are much shorter than after a CMS-failure (when they sometimes take more than 4000 seconds ;-)) and we can define when they occur.
>
> A full GC that takes an hour is definitely a bug.
> Have you logged that bug? Or at least share the GC log? What's the version of the JDK that this behaviour was seen with?
>
> ...
>
> > Furthermore this should not increase the total-heap-space:
> > http://java4.info/g1/all.png
>
> Remember that jconsole asynchronously samples the heap, whose size is read "with possible glitches". Instead, you should probably rely on the GC log in order to assess the heap size after each GC event, rather than the asynchronous samples from jconsole. I myself have not had the chance to look at your GC logs to see what they indicated wrt the size of Eden and of the Heap.
>
> -- ramki

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111210/627b7dfc/attachment.html

From tony.printezis at oracle.com Mon Dec 12 06:32:25 2011
From: tony.printezis at oracle.com (Tony Printezis)
Date: Mon, 12 Dec 2011 09:32:25 -0500
Subject: G1 discovers same garbage again?
In-Reply-To:
References: <4ED3DE79.5050801@java4.info> <4ED51672.6030105@oracle.com> <4ED5EAE1.2020102@java4.info> <4EDCFD9D.5040005@oracle.com> <4EDD1924.2030803@java4.info> <4EE26263.3040307@oracle.com>
Message-ID: <4EE61079.9030807@oracle.com>

Ramki,

On 12/09/2011 07:34 PM, Srinivas Ramakrishna wrote:
>
> A couple of things caught my eye....
>
> On 12/05/2011 02:19 PM, Florian Binder wrote:
> >> I assume you know when the data will be dropped and reinitialized, right? Can you do a Full GC (with System.gc()) after you re-initialize the heap? This typically helps a lot with CMS.
> > Yes, this is exactly what we are trying at this time ;-)
> > Even though these Full-GC pauses are very long (60-300 seconds), they are much shorter than after a CMS-failure (when they sometimes take more than 4000 seconds ;-)) and we can define when they occur.
>
> A full GC that takes an hour is definitely a bug.

Good catch, I clearly didn't do the translation.
:-) Could it be paging?

Tony

> Have you logged that bug? Or at least share the GC log? What's the version of the JDK that this behaviour was seen with?
>
> ...
>
> > Furthermore this should not increase the total-heap-space:
> > http://java4.info/g1/all.png
>
> Remember that jconsole asynchronously samples the heap, whose size is read "with possible glitches". Instead, you should probably rely on the GC log in order to assess the heap size after each GC event, rather than the asynchronous samples from jconsole. I myself have not had the chance to look at your GC logs to see what they indicated wrt the size of Eden and of the Heap.
>
> -- ramki

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111212/7743ecb1/attachment.html

From vitalyd at gmail.com Mon Dec 12 06:45:19 2011
From: vitalyd at gmail.com (Vitaly Davidovich)
Date: Mon, 12 Dec 2011 09:45:19 -0500
Subject: G1 discovers same garbage again?
In-Reply-To: <4EE61079.9030807@oracle.com>
References: <4ED3DE79.5050801@java4.info> <4ED51672.6030105@oracle.com> <4ED5EAE1.2020102@java4.info> <4EDCFD9D.5040005@oracle.com> <4EDD1924.2030803@java4.info> <4EE26263.3040307@oracle.com> <4EE61079.9030807@oracle.com>
Message-ID:

I wonder: did it actually complete after 4000 seconds? Maybe the process was restarted before it finished but it had been running for that long up to that point; if so, maybe GC hit some bug and would never have terminated? I guess that's something for Florian to answer.

On Dec 12, 2011 9:34 AM, "Tony Printezis" wrote:
> Ramki,
>
> On 12/09/2011 07:34 PM, Srinivas Ramakrishna wrote:
>
> A couple of things caught my eye....
> > On 12/05/2011 02:19 PM, Florian Binder wrote: >> >> I assume you know when the data will be dropped and reinitialized, >> >> right? Can you do a Full GC (with System.gc()) after you re-initialize >> >> the heap. This typically helps a lot with CMS. >> > Yes, this is exactly what we are trying at this time ;-) >> > Either this Full-GC pauses are very long (60-300 seconds) they are much >> > shorter than after a CMS-failure (when they take sometimes more than >> > 4000 seconds ;-)) and we can define when they occure. >> > > A full GC that takes an hour is definitely a bug. > > > Good catch, I clearly didn't do the translation. :-) Could it be paging? > > Tony > > Have you logged that bug? > Or at least share the GC log? What's the version of the JDK that this > behaviour > was seen with? > > ... > >> > Furthermore this should not increase the total-heap-space: >> > http://java4.info/g1/all.png >> > > Remember that jconsole asynchronously samples the heap, whose size is > read "with possible glitches". Rather, you should probably rely on the GC > log in order to assess > the heap size after each GC event, rather than the asynchronous samples > from > jconsole. I myself have not had the chance to look at yr GC logs to see > what that indicated wrt the size of Eden and of the Heap. > > -- ramki > > > _______________________________________________ > hotspot-gc-use mailing listhotspot-gc-use at openjdk.java.nethttp://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111212/efb3d1e9/attachment.html From tonimenen at gmail.com Mon Dec 12 13:00:41 2011 From: tonimenen at gmail.com (Toni Menendez Lopez) Date: Mon, 12 Dec 2011 22:00:41 +0100 Subject: OpenJDK vs SunJDK Message-ID: Hello all, I want to ask all of you one question! I am normally working with SunJDK for my application (Java(TM) SE Runtime Environment (build 1.6.0_15-b03)), but now I want to migrate to OpenJDK (OpenJDK 64-Bit Server VM (build 1.6.0-b09, mixed mode)). I have executed the same performance test on SunJDK and OpenJDK, and OpenJDK showed worse performance than SunJDK. Have any of you had a similar experience? My OS is RHEL 5.4. Best regards, Toni. From fancyerii at gmail.com Fri Dec 23 03:14:14 2011 From: fancyerii at gmail.com (Li Li) Date: Fri, 23 Dec 2011 19:14:14 +0800 Subject: question about Unsafe.allocateInstance Message-ID: hi all, I want to allocate and free memory as in C/C++, and I found the Unsafe class in the HotSpot VM. I know the danger of using this class, but I still want to try it. allocateMemory and freeMemory are hard to use because I can't deal with primitive types such as int and long directly and need to convert them to byte arrays. I also found a method named allocateInstance, which seems to be what I want: it just allocates space for the object. I don't know whether the memory is in the heap or in direct memory. If it's in direct memory, there should be a method like freeInstance, but I can't find one. -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111223/ca2afc85/attachment.html From fweimer at bfk.de Fri Dec 23 03:51:05 2011 From: fweimer at bfk.de (Florian Weimer) Date: Fri, 23 Dec 2011 11:51:05 +0000 Subject: question about Unsafe.allocateInstance In-Reply-To: (Li Li's message of "Fri, 23 Dec 2011 19:14:14 +0800") References: Message-ID: <82k45na4ly.fsf@mid.bfk.de> * Li Li: > And I also find a method named allocateInstance, which seems what I > want. allocateInstance() allocates objects on the Java heap, just like new. I'm pretty sure that there is no way at all to allocate instances which are not on the Java heap and subject to garbage collection. -- Florian Weimer BFK edv-consulting GmbH http://www.bfk.de/ Kriegsstraße 100 tel: +49-721-96201-1 D-76133 Karlsruhe fax: +49-721-96201-99 From rednaxelafx at gmail.com Fri Dec 23 08:54:16 2011 From: rednaxelafx at gmail.com (Krystal Mok) Date: Sat, 24 Dec 2011 00:54:16 +0800 Subject: question about Unsafe.allocateInstance In-Reply-To: <82k45na4ly.fsf@mid.bfk.de> References: <82k45na4ly.fsf@mid.bfk.de> Message-ID: Unsafe.allocateInstance() is implemented in HotSpot VM by directly calling JNI's AllocObject() function [1]. An object instance is allocated in the Java heap, but no constructors are invoked for this instance. This method is mainly used to implement BootstrapConstructorAccessorImpl in the class library. Li, what is your original intent for doing explicit memory management in Java? - Kris [1]: http://docs.oracle.com/javase/6/docs/technotes/guides/jni/spec/functions.html#wp16337 On Fri, Dec 23, 2011 at 7:51 PM, Florian Weimer wrote: > * Li Li: > > > And I also find a method named allocateInstance, which seems what I > > want. > > allocateInstance() allocates objects on the Java heap, just like new. > I'm pretty sure that there is no way at all to allocate instances > which are not on the Java heap and subject to garbage collection.
> > -- > Florian Weimer > BFK edv-consulting GmbH http://www.bfk.de/ > Kriegsstraße 100 tel: +49-721-96201-1 > D-76133 Karlsruhe fax: +49-721-96201-99 > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111224/13b8ee11/attachment.html From fancyerii at gmail.com Fri Dec 23 09:46:15 2011 From: fancyerii at gmail.com (Li Li) Date: Sat, 24 Dec 2011 01:46:15 +0800 Subject: question about Unsafe.allocateInstance In-Reply-To: References: <82k45na4ly.fsf@mid.bfk.de> Message-ID: Java's memory management is great; it reduces memory leaks. But in some situations, I want to manage memory myself. On Sat, Dec 24, 2011 at 12:54 AM, Krystal Mok wrote: > Unsafe.allocateInstance() is implemented in HotSpot VM by directly calling > JNI's AllocObject() function [1]. > An object instance is allocated in the Java heap, but no constructors are > invoked for this instance. > This method is mainly used to implement BootstrapConstructorAccessorImpl > in the class library. > > Li, what is your original intent for doing explicit memory management in > Java? > > - Kris > > [1]: > http://docs.oracle.com/javase/6/docs/technotes/guides/jni/spec/functions.html#wp16337 > > On Fri, Dec 23, 2011 at 7:51 PM, Florian Weimer wrote: > >> * Li Li: >> >> > And I also find a method named allocateInstance, which seems what I >> > want. >> >> allocateInstance() allocates objects on the Java heap, just like new. >> I'm pretty sure that there is no way at all to allocate instances >> which are not on the Java heap and subject to garbage collection.
>> >> -- >> Florian Weimer >> BFK edv-consulting GmbH http://www.bfk.de/ >> Kriegsstraße 100 tel: +49-721-96201-1 >> D-76133 Karlsruhe fax: +49-721-96201-99 >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111224/48b01e20/attachment.html From taras.tielkes at gmail.com Tue Dec 27 05:07:06 2011 From: taras.tielkes at gmail.com (Taras Tielkes) Date: Tue, 27 Dec 2011 14:07:06 +0100 Subject: Promotion failures: indication of CMS fragmentation? Message-ID: Hi, We're running an application with the CMS/ParNew collectors that is experiencing occasional promotion failures. Environment is Linux 2.6.18 (x64), JVM is 1.6.0_29 in server mode. I've listed the specific JVM options used below (a). The application is deployed across a handful of machines, and the promotion failures are fairly uniform across those. The first kind of failure we observe is a promotion failure during ParNew collection; I've included a snippet from the gc log below (b). The second kind of failure is a concurrent mode failure (perhaps triggered by the same cause), see (c) below. The frequency (after running for some weeks) is approximately once per day. This is bearable, but obviously we'd like to improve on this. Apart from high-volume request handling (which allocates a lot of small objects), the application also runs a few dozen background threads that download and process XML documents, typically in the 5-30 MB range. A known deficiency in the existing code is that the XML content is copied twice before processing (once to a byte[], and later again to a String/char[]).
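A minimal sketch of that double-copy pattern (the class and helper names here are illustrative, not the actual application code): the stream is first buffered into a byte[], and then decoding it into a String allocates a char[] holding one 2-byte char per byte for ASCII-range content, which is how a 30 MB stream can end up behind a ~60 MB char[] in the old generation.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class DoubleCopy {
    // Copy #1: buffer the whole stream into a single large byte[].
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] xml = new byte[1 << 20];          // stand-in for a large XML document
        Arrays.fill(xml, (byte) 'x');
        byte[] bytes = readFully(new ByteArrayInputStream(xml));
        // Copy #2: decoding allocates a char[] of the same element count,
        // i.e. twice the byte size, since each char is 2 bytes.
        String text = new String(bytes, StandardCharsets.UTF_8);
        System.out.println(bytes.length + " " + text.length()); // prints "1048576 1048576"
    }
}
```

Both the byte[] and the char[] are single contiguous allocations, which is why this pattern is a plausible trigger for CMS free-list fragmentation.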
Given that a 30 MB XML stream will result in a 60 MB java.lang.String/char[], my suspicion is that these big array allocations are causing us to run into the CMS fragmentation issue. My questions are: 1) Does the data from the GC logs provide sufficient evidence to conclude that CMS fragmentation is the cause of the promotion failure? 2) If not, what's the next step of investigating the cause? 3) We're planning to at least add -XX:+PrintPromotionFailure to get a feeling for the size of the objects that fail promotion. Overall, it seem that -XX:PrintFLSStatistics=1 is actually the only reliable approach to diagnose CMS fragmentation. Is this indeed the case? Thanks in advance, Taras a) Current JVM options: -------------------------------- -server -Xms5g -Xmx5g -Xmn400m -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+PrintGCTimeStamps -verbose:gc -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:SurvivorRatio=8 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68 -Xloggc:gc.log -------------------------------- b) Promotion failure during ParNew -------------------------------- 2011-12-08T18:14:40.966+0100: 219729.868: [GC 219729.868: [ParNew: 368640K->40959K(368640K), 0.0693460 secs] 3504917K->3195098K(5201920K), 0.0696500 secs] [Times: user=0.39 sys=0.01, real=0.07 secs] 2011-12-08T18:14:43.778+0100: 219732.679: [GC 219732.679: [ParNew: 368639K->31321K(368640K), 0.0511400 secs] 3522778K->3198316K(5201920K), 0.0514420 secs] [Times: user=0.28 sys=0.00, real=0.05 secs] 2011-12-08T18:14:46.945+0100: 219735.846: [GC 219735.846: [ParNew: 359001K->18694K(368640K), 0.0272970 secs] 3525996K->3185690K(5201920K), 0.0276080 secs] [Times: user=0.19 sys=0.00, real=0.03 secs] 2011-12-08T18:14:49.036+0100: 219737.938: [GC 219737.938: [ParNew (promotion failed): 338813K->361078K(368640K), 0.1321200 secs]219738.070: [CMS: 3167747K->434291K(4833280K), 
4.8881570 secs] 3505808K->434291K (5201920K), [CMS Perm : 116893K->116883K(262144K)], 5.0206620 secs] [Times: user=5.24 sys=0.00, real=5.02 secs] 2011-12-08T18:14:54.721+0100: 219743.622: [GC 219743.623: [ParNew: 327680K->40960K(368640K), 0.0949460 secs] 761971K->514584K(5201920K), 0.0952820 secs] [Times: user=0.52 sys=0.04, real=0.10 secs] 2011-12-08T18:14:55.580+0100: 219744.481: [GC 219744.482: [ParNew: 368640K->40960K(368640K), 0.1299190 secs] 842264K->625681K(5201920K), 0.1302190 secs] [Times: user=0.72 sys=0.01, real=0.13 secs] 2011-12-08T18:14:58.050+0100: 219746.952: [GC 219746.952: [ParNew: 368640K->40960K(368640K), 0.0870940 secs] 953361K->684121K(5201920K), 0.0874110 secs] [Times: user=0.48 sys=0.01, real=0.09 secs] -------------------------------- c) Promotion failure during CMS -------------------------------- 2011-12-14T08:29:26.628+0100: 703015.530: [GC 703015.530: [ParNew: 357228K->40960K(368640K), 0.0525110 secs] 3603068K->3312743K(5201920K), 0.0528120 secs] [Times: user=0.37 sys=0.00, real=0.05 secs] 2011-12-14T08:29:28.864+0100: 703017.766: [GC 703017.766: [ParNew: 366075K->37119K(368640K), 0.0479780 secs] 3637859K->3317662K(5201920K), 0.0483090 secs] [Times: user=0.24 sys=0.01, real=0.05 secs] 2011-12-14T08:29:29.553+0100: 703018.454: [GC 703018.455: [ParNew: 364792K->40960K(368640K), 0.0421740 secs] 3645334K->3334944K(5201920K), 0.0424810 secs] [Times: user=0.30 sys=0.00, real=0.04 secs] 2011-12-14T08:29:29.600+0100: 703018.502: [GC [1 CMS-initial-mark: 3293984K(4833280K)] 3335025K(5201920K), 0.0272490 secs] [Times: user=0.02 sys=0.00, real=0.03 secs] 2011-12-14T08:29:29.628+0100: 703018.529: [CMS-concurrent-mark-start] 2011-12-14T08:29:30.718+0100: 703019.620: [GC 703019.620: [ParNew: 368640K->40960K(368640K), 0.0836690 secs] 3662624K->3386039K(5201920K), 0.0839690 secs] [Times: user=0.50 sys=0.01, real=0.08 secs] 2011-12-14T08:29:30.827+0100: 703019.729: [CMS-concurrent-mark: 1.108/1.200 secs] [Times: user=6.83 sys=0.23, real=1.20 secs] 
2011-12-14T08:29:30.827+0100: 703019.729: [CMS-concurrent-preclean-start] 2011-12-14T08:29:30.938+0100: 703019.840: [CMS-concurrent-preclean: 0.093/0.111 secs] [Times: user=0.48 sys=0.02, real=0.11 secs] 2011-12-14T08:29:30.938+0100: 703019.840: [CMS-concurrent-abortable-preclean-start] 2011-12-14T08:29:32.337+0100: 703021.239: [CMS-concurrent-abortable-preclean: 1.383/1.399 secs] [Times: user=6.68 sys=0.27, real=1.40 secs] 2011-12-14T08:29:32.343+0100: 703021.244: [GC[YG occupancy: 347750 K (368640 K)]2011-12-14T08:29:32.343+0100: 703021.244: [GC 703021.244: [ParNew (promotion failed): 347750K->347750K(368640K), 9.8729020 secs] 3692829K->3718580K(5201920K), 9.8732380 secs] [Times: user=12.00 sys=2.58, real=9.88 secs] 703031.118: [Rescan (parallel) , 0.2826110 secs]703031.400: [weak refs processing, 0.0014780 secs]703031.402: [class unloading, 0.0176610 secs]703031.419: [scrub symbol & string tables, 0.0094960 secs] [1 CMS -remark: 3370830K(4833280K)] 3718580K(5201920K), 10.1916910 secs] [Times: user=13.73 sys=2.59, real=10.19 secs] 2011-12-14T08:29:42.535+0100: 703031.436: [CMS-concurrent-sweep-start] 2011-12-14T08:29:42.591+0100: 703031.493: [Full GC 703031.493: [CMS2011-12-14T08:29:48.616+0100: 703037.518: [CMS-concurrent-sweep: 6.046/6.082 secs] [Times: user=6.18 sys=0.01, real=6.09 secs] (concurrent mode failure): 3370829K->433437K(4833280K), 10.9594300 secs] 3739469K->433437K(5201920K), [CMS Perm : 121702K->121690K(262144K)], 10.9597540 secs] [Times: user=10.95 sys=0.00, real=10.96 secs] 2011-12-14T08:29:53.997+0100: 703042.899: [GC 703042.899: [ParNew: 327680K->40960K(368640K), 0.0799960 secs] 761117K->517836K(5201920K), 0.0804100 secs] [Times: user=0.46 sys=0.00, real=0.08 secs] 2011-12-14T08:29:54.649+0100: 703043.551: [GC 703043.551: [ParNew: 368640K->40960K(368640K), 0.0784460 secs] 845516K->557872K(5201920K), 0.0787920 secs] [Times: user=0.40 sys=0.01, real=0.08 secs] 2011-12-14T08:29:56.418+0100: 703045.320: [GC 703045.320: [ParNew: 
368640K->40960K(368640K), 0.0784040 secs] 885552K->603017K(5201920K), 0.0787630 secs] [Times: user=0.41 sys=0.01, real=0.07 secs] -------------------------------- From jon.masamitsu at oracle.com Tue Dec 27 09:13:16 2011 From: jon.masamitsu at oracle.com (Jon Masamitsu) Date: Tue, 27 Dec 2011 09:13:16 -0800 Subject: Promotion failures: indication of CMS fragmentation? In-Reply-To: References: Message-ID: <4EF9FCAC.3030208@oracle.com> Taras, PrintPromotionFailure seems like it would go a long way to identify the root of your promotion failures (or at least eliminating some possible causes). I think it would help focus the discussion if you could send the result of that experiment early. Jon On 12/27/2011 5:07 AM, Taras Tielkes wrote: > Hi, > > We're running an application with the CMS/ParNew collectors that is > experiencing occasional promotion failures. > Environment is Linux 2.6.18 (x64), JVM is 1.6.0_29 in server mode. > I've listed the specific JVM options used below (a). > > The application is deployed across a handful of machines, and the > promotion failures are fairly uniform across those. > > The first kind of failure we observe is a promotion failure during > ParNew collection, I've included a snipped from the gc log below (b). > The second kind of failure is a concurrrent mode failure (perhaps > triggered by the same cause), see (c) below. > The frequency (after running for a some weeks) is approximately once > per day. This is bearable, but obviously we'd like to improve on this. > > Apart from high-volume request handling (which allocates a lot of > small objects), the application also runs a few dozen background > threads that download and process XML documents, typically in the 5-30 > MB range. > A known deficiency in the existing code is that the XML content is > copied twice before processing (once to a byte[], and later again to a > String/char[]). 
> Given that a 30 MB XML stream will result in a 60 MB > java.lang.String/char[], my suspicion is that these big array > allocations are causing us to run into the CMS fragmentation issue. > > My questions are: > 1) Does the data from the GC logs provide sufficient evidence to > conclude that CMS fragmentation is the cause of the promotion failure? > 2) If not, what's the next step of investigating the cause? > 3) We're planning to at least add -XX:+PrintPromotionFailure to get a > feeling for the size of the objects that fail promotion. > Overall, it seem that -XX:PrintFLSStatistics=1 is actually the only > reliable approach to diagnose CMS fragmentation. Is this indeed the > case? > > Thanks in advance, > Taras > > a) Current JVM options: > -------------------------------- > -server > -Xms5g > -Xmx5g > -Xmn400m > -XX:PermSize=256m > -XX:MaxPermSize=256m > -XX:+PrintGCTimeStamps > -verbose:gc > -XX:+PrintGCDateStamps > -XX:+PrintGCDetails > -XX:SurvivorRatio=8 > -XX:+UseConcMarkSweepGC > -XX:+UseParNewGC > -XX:+DisableExplicitGC > -XX:+UseCMSInitiatingOccupancyOnly > -XX:+CMSClassUnloadingEnabled > -XX:+CMSScavengeBeforeRemark > -XX:CMSInitiatingOccupancyFraction=68 > -Xloggc:gc.log > -------------------------------- > > b) Promotion failure during ParNew > -------------------------------- > 2011-12-08T18:14:40.966+0100: 219729.868: [GC 219729.868: [ParNew: > 368640K->40959K(368640K), 0.0693460 secs] > 3504917K->3195098K(5201920K), 0.0696500 secs] [Times: user=0.39 > sys=0.01, real=0.07 secs] > 2011-12-08T18:14:43.778+0100: 219732.679: [GC 219732.679: [ParNew: > 368639K->31321K(368640K), 0.0511400 secs] > 3522778K->3198316K(5201920K), 0.0514420 secs] [Times: user=0.28 > sys=0.00, real=0.05 secs] > 2011-12-08T18:14:46.945+0100: 219735.846: [GC 219735.846: [ParNew: > 359001K->18694K(368640K), 0.0272970 secs] > 3525996K->3185690K(5201920K), 0.0276080 secs] [Times: user=0.19 > sys=0.00, real=0.03 secs] > 2011-12-08T18:14:49.036+0100: 219737.938: [GC 219737.938: 
[ParNew > (promotion failed): 338813K->361078K(368640K), 0.1321200 > secs]219738.070: [CMS: 3167747K->434291K(4833280K), 4.8881570 secs] > 3505808K->434291K > (5201920K), [CMS Perm : 116893K->116883K(262144K)], 5.0206620 secs] > [Times: user=5.24 sys=0.00, real=5.02 secs] > 2011-12-08T18:14:54.721+0100: 219743.622: [GC 219743.623: [ParNew: > 327680K->40960K(368640K), 0.0949460 secs] 761971K->514584K(5201920K), > 0.0952820 secs] [Times: user=0.52 sys=0.04, real=0.10 secs] > 2011-12-08T18:14:55.580+0100: 219744.481: [GC 219744.482: [ParNew: > 368640K->40960K(368640K), 0.1299190 secs] 842264K->625681K(5201920K), > 0.1302190 secs] [Times: user=0.72 sys=0.01, real=0.13 secs] > 2011-12-08T18:14:58.050+0100: 219746.952: [GC 219746.952: [ParNew: > 368640K->40960K(368640K), 0.0870940 secs] 953361K->684121K(5201920K), > 0.0874110 secs] [Times: user=0.48 sys=0.01, real=0.09 secs] > -------------------------------- > > c) Promotion failure during CMS > -------------------------------- > 2011-12-14T08:29:26.628+0100: 703015.530: [GC 703015.530: [ParNew: > 357228K->40960K(368640K), 0.0525110 secs] > 3603068K->3312743K(5201920K), 0.0528120 secs] [Times: user=0.37 > sys=0.00, real=0.05 secs] > 2011-12-14T08:29:28.864+0100: 703017.766: [GC 703017.766: [ParNew: > 366075K->37119K(368640K), 0.0479780 secs] > 3637859K->3317662K(5201920K), 0.0483090 secs] [Times: user=0.24 > sys=0.01, real=0.05 secs] > 2011-12-14T08:29:29.553+0100: 703018.454: [GC 703018.455: [ParNew: > 364792K->40960K(368640K), 0.0421740 secs] > 3645334K->3334944K(5201920K), 0.0424810 secs] [Times: user=0.30 > sys=0.00, real=0.04 secs] > 2011-12-14T08:29:29.600+0100: 703018.502: [GC [1 CMS-initial-mark: > 3293984K(4833280K)] 3335025K(5201920K), 0.0272490 secs] [Times: > user=0.02 sys=0.00, real=0.03 secs] > 2011-12-14T08:29:29.628+0100: 703018.529: [CMS-concurrent-mark-start] > 2011-12-14T08:29:30.718+0100: 703019.620: [GC 703019.620: [ParNew: > 368640K->40960K(368640K), 0.0836690 secs] > 3662624K->3386039K(5201920K), 
0.0839690 secs] [Times: user=0.50 > sys=0.01, real=0.08 secs] > 2011-12-14T08:29:30.827+0100: 703019.729: [CMS-concurrent-mark: > 1.108/1.200 secs] [Times: user=6.83 sys=0.23, real=1.20 secs] > 2011-12-14T08:29:30.827+0100: 703019.729: [CMS-concurrent-preclean-start] > 2011-12-14T08:29:30.938+0100: 703019.840: [CMS-concurrent-preclean: > 0.093/0.111 secs] [Times: user=0.48 sys=0.02, real=0.11 secs] > 2011-12-14T08:29:30.938+0100: 703019.840: > [CMS-concurrent-abortable-preclean-start] > 2011-12-14T08:29:32.337+0100: 703021.239: > [CMS-concurrent-abortable-preclean: 1.383/1.399 secs] [Times: > user=6.68 sys=0.27, real=1.40 secs] > 2011-12-14T08:29:32.343+0100: 703021.244: [GC[YG occupancy: 347750 K > (368640 K)]2011-12-14T08:29:32.343+0100: 703021.244: [GC 703021.244: > [ParNew (promotion failed): 347750K->347750K(368640K), 9.8729020 secs] > 3692829K->3718580K(5201920K), 9.8732380 secs] [Times: user=12.00 > sys=2.58, real=9.88 secs] > 703031.118: [Rescan (parallel) , 0.2826110 secs]703031.400: [weak refs > processing, 0.0014780 secs]703031.402: [class unloading, 0.0176610 > secs]703031.419: [scrub symbol& string tables, 0.0094960 secs] [1 CMS > -remark: 3370830K(4833280K)] 3718580K(5201920K), 10.1916910 secs] > [Times: user=13.73 sys=2.59, real=10.19 secs] > 2011-12-14T08:29:42.535+0100: 703031.436: [CMS-concurrent-sweep-start] > 2011-12-14T08:29:42.591+0100: 703031.493: [Full GC 703031.493: > [CMS2011-12-14T08:29:48.616+0100: 703037.518: [CMS-concurrent-sweep: > 6.046/6.082 secs] [Times: user=6.18 sys=0.01, real=6.09 secs] > (concurrent mode failure): 3370829K->433437K(4833280K), 10.9594300 > secs] 3739469K->433437K(5201920K), [CMS Perm : > 121702K->121690K(262144K)], 10.9597540 secs] [Times: user=10.95 > sys=0.00, real=10.96 secs] > 2011-12-14T08:29:53.997+0100: 703042.899: [GC 703042.899: [ParNew: > 327680K->40960K(368640K), 0.0799960 secs] 761117K->517836K(5201920K), > 0.0804100 secs] [Times: user=0.46 sys=0.00, real=0.08 secs] > 2011-12-14T08:29:54.649+0100: 
703043.551: [GC 703043.551: [ParNew: > 368640K->40960K(368640K), 0.0784460 secs] 845516K->557872K(5201920K), > 0.0787920 secs] [Times: user=0.40 sys=0.01, real=0.08 secs] > 2011-12-14T08:29:56.418+0100: 703045.320: [GC 703045.320: [ParNew: > 368640K->40960K(368640K), 0.0784040 secs] 885552K->603017K(5201920K), > 0.0787630 secs] [Times: user=0.41 sys=0.01, real=0.07 secs] > -------------------------------- > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From gkorland at gmail.com Tue Dec 27 13:20:24 2011 From: gkorland at gmail.com (Guy Korland) Date: Tue, 27 Dec 2011 23:20:24 +0200 Subject: Turning off generational GC Message-ID: Hi, I hope this is the right forum for this. It seems like no matter how small we set the young generation, it takes more than 20ms. Is there a way to turn off generational GC, especially in CMS? Thanks, Guy -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111227/74217d59/attachment.html From gkorland at gmail.com Tue Dec 27 13:27:25 2011 From: gkorland at gmail.com (Guy Korland) Date: Tue, 27 Dec 2011 23:27:25 +0200 Subject: G1 GC Occupancy setting Message-ID: Is there a way to set the G1 Occupancy? We noticed it doesn't really kick in before the JVM is pretty full. Thanks, Guy -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111227/31361817/attachment.html From jon.masamitsu at oracle.com Tue Dec 27 14:06:53 2011 From: jon.masamitsu at oracle.com (Jon Masamitsu) Date: Tue, 27 Dec 2011 14:06:53 -0800 Subject: Turning off generational GC In-Reply-To: References: Message-ID: <4EFA417D.4070009@oracle.com> For the hotspot garbage collectors the short answer is "no": there is no way to turn off generational GC.
I think it's even pretty deeply entrenched in G1, which is only logically generational. If CMS were not generational, in the best case you would see pauses on the order of 20ms. Maybe much larger although less frequent. More likely would be concurrent mode failures which would lead to full GC's. On 12/27/2011 1:20 PM, Guy Korland wrote: > Hi, > > I hope this is the right forum for this. > It seems like no matter how small we set the young generation, it take more > than 20ms. > Is there a way turn off generational GC, especially in CMS? > > Thanks, > Guy > > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111227/99fc1a6a/attachment.html From gkorland at gmail.com Wed Dec 28 13:17:02 2011 From: gkorland at gmail.com (Guy Korland) Date: Wed, 28 Dec 2011 23:17:02 +0200 Subject: hotspot-gc-use Digest, Vol 46, Issue 9 In-Reply-To: References: Message-ID: Thanks for the answer, is there any chance you can also help with my other question? Is there a way, as with CMS, to control the "occupancy"? Thanks, Guy > --- > Date: Tue, 27 Dec 2011 14:06:53 -0800 > From: Jon Masamitsu > Subject: Re: Turning off generational GC > To: hotspot-gc-use at openjdk.java.net > Message-ID: <4EFA417D.4070009 at oracle.com> > Content-Type: text/plain; charset="iso-8859-1" > > For the hotspot garbage collectors the short answer is "no" there is no > way to > turn off generational GC. I think it's even pretty deeply entrenched in > G1 which > is only logically generational. > > If CMS were not generational, in the best cause you would see pauses > on the order of 20ms. Maybe much larger although less frequent. > More likely would be concurrent mode failures which would lead to > full GC's.
> > > On 12/27/2011 1:20 PM, Guy Korland wrote: >> Hi, >> >> I hope this is the right forum for this. >> It seems like no matter how small we set the young generation, it take more >> than 20ms. >> Is there a way turn off generational GC, especially in CMS? >> >> Thanks, >> Guy >> >> >> >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111227/99fc1a6a/attachment-0001.html > > ------------------------------ > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > End of hotspot-gc-use Digest, Vol 46, Issue 9 > ********************************************* > -- Regards, Guy Korland -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20111228/5ba157c5/attachment.html From jon.masamitsu at oracle.com Wed Dec 28 15:32:35 2011 From: jon.masamitsu at oracle.com (Jon Masamitsu) Date: Wed, 28 Dec 2011 15:32:35 -0800 Subject: hotspot-gc-use Digest, Vol 46, Issue 9 In-Reply-To: References: Message-ID: <4EFBA713.8080407@oracle.com> On 12/28/2011 1:17 PM, Guy Korland wrote: > Thanks for the answer, is there any chance you can also help with my other > question? > Is there a way with as with cms to control the "occupancy"? If you mean the occupancy at which CMS starts a collection, try -XX:CMSInitiatingOccupancyFraction=NN where NN is the percentage of the tenured generation at which a CMS collection will start. If you mean something else, please ask again. 
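For reference, a sketch of how these CMS occupancy flags are typically combined on a command line (the heap sizes and jar name below are placeholders, not a recommendation; the flags themselves also appear in Taras's configuration earlier in this thread):

```shell
# Start CMS cycles when the tenured generation reaches 68% occupancy,
# and use only that fixed threshold instead of the adaptive heuristic.
java -Xms5g -Xmx5g \
     -XX:+UseConcMarkSweepGC \
     -XX:+UseParNewGC \
     -XX:CMSInitiatingOccupancyFraction=68 \
     -XX:+UseCMSInitiatingOccupancyOnly \
     -jar app.jar
```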
> Thanks, > Guy > > >> --- >> Date: Tue, 27 Dec 2011 14:06:53 -0800 >> From: Jon Masamitsu >> Subject: Re: Turning off generational GC >> To: hotspot-gc-use at openjdk.java.net >> Message-ID:<4EFA417D.4070009 at oracle.com> >> Content-Type: text/plain; charset="iso-8859-1" >> >> For the hotspot garbage collectors the short answer is "no" there is no >> way to >> turn off generational GC. I think it's even pretty deeply entrenched in >> G1 which >> is only logically generational. >> >> If CMS were not generational, in the best cause you would see pauses >> on the order of 20ms. Maybe much larger although less frequent. >> More likely would be concurrent mode failures which would lead to >> full GC's. >> >> >> On 12/27/2011 1:20 PM, Guy Korland wrote: >>> Hi, >>> >>> I hope this is the right forum for this. >>> It seems like no matter how small we set the young generation, it take > more >>> than 20ms. >>> Is there a way turn off generational GC, especially in CMS? >>> >>> Thanks, >>> Guy >>> >>> >>> >>> _______________________________________________ >>> hotspot-gc-use mailing list >>> hotspot-gc-use at openjdk.java.net >>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >> -------------- next part -------------- >> An HTML attachment was scrubbed... 
From gkorland at gmail.com Thu Dec 29 01:45:41 2011
From: gkorland at gmail.com (Guy Korland)
Date: Thu, 29 Dec 2011 11:45:41 +0200
Subject: hotspot-gc-use Digest, Vol 46, Issue 9
In-Reply-To: <4EFBA713.8080407@oracle.com>
References: <4EFBA713.8080407@oracle.com>
Message-ID: 

Yes, I'm familiar with this configuration for CMS.
Is there a similar configuration for G1?

Thanks,
Guy

On Thu, Dec 29, 2011 at 1:32 AM, Jon Masamitsu wrote:
> On 12/28/2011 1:17 PM, Guy Korland wrote:
>> Thanks for the answer, is there any chance you can also help with my
>> other question?
>> Is there a way, as with CMS, to control the "occupancy"?
>
> If you mean the occupancy at which CMS starts a collection, try
> -XX:CMSInitiatingOccupancyFraction=NN where NN is the
> percentage of the tenured generation at which a CMS collection
> will start.
>
> If you mean something else, please ask again.
>
>> ---
>>> Date: Tue, 27 Dec 2011 14:06:53 -0800
>>> From: Jon Masamitsu
>>> Subject: Re: Turning off generational GC
>>> To: hotspot-gc-use at openjdk.java.net
>>> Message-ID: <4EFA417D.4070009 at oracle.com>
>>>
>>> For the hotspot garbage collectors the short answer is "no": there is
>>> no way to turn off generational GC. I think it's even pretty deeply
>>> entrenched in G1, which is only logically generational.
>>>
>>> If CMS were not generational, in the best case you would see pauses
>>> on the order of 20ms, maybe much larger although less frequent.
>>> More likely would be concurrent mode failures, which would lead to
>>> full GCs.
>>>
>>> On 12/27/2011 1:20 PM, Guy Korland wrote:
>>>> Hi,
>>>>
>>>> I hope this is the right forum for this.
>>>> It seems like no matter how small we set the young generation, it
>>>> takes more than 20ms.
>>>> Is there a way to turn off generational GC, especially in CMS?
>>>>
>>>> Thanks,
>>>> Guy

From jon.masamitsu at oracle.com Thu Dec 29 06:28:24 2011
From: jon.masamitsu at oracle.com (Jon Masamitsu)
Date: Thu, 29 Dec 2011 06:28:24 -0800
Subject: hotspot-gc-use Digest, Vol 46, Issue 9
In-Reply-To: 
References: <4EFBA713.8080407@oracle.com>
Message-ID: <4EFC7908.2010504@oracle.com>

This is close to the CMS flag but is a percent of
the entire heap (as opposed to the tenured
generation for CMS).

-XX:InitiatingHeapOccupancyPercent=NN

On 12/29/11 01:45, Guy Korland wrote:
> Yes, I'm familiar with this configuration for CMS.
> Is there a similar configuration for G1?

From gkorland at gmail.com Fri Dec 30 07:32:46 2011
From: gkorland at gmail.com (Guy Korland)
Date: Fri, 30 Dec 2011 17:32:46 +0200
Subject: hotspot-gc-use Digest, Vol 46, Issue 9
In-Reply-To: <4EFC7908.2010504@oracle.com>
References: <4EFBA713.8080407@oracle.com> <4EFC7908.2010504@oracle.com>
Message-ID: 

Thanks, I'll test it. Is there also something similar to
-XX:+UseCMSInitiatingOccupancyOnly?

Thanks,
Guy

On Thu, Dec 29, 2011 at 4:28 PM, Jon Masamitsu wrote:
> This is close to the CMS flag but is a percent of
> the entire heap (as opposed to the tenured
> generation for CMS).
>
> -XX:InitiatingHeapOccupancyPercent=NN

From jon.masamitsu at oracle.com Fri Dec 30 08:08:37 2011
From: jon.masamitsu at oracle.com (Jon Masamitsu)
Date: Fri, 30 Dec 2011 08:08:37 -0800
Subject: hotspot-gc-use Digest, Vol 46, Issue 9
In-Reply-To: 
References: <4EFBA713.8080407@oracle.com> <4EFC7908.2010504@oracle.com>
Message-ID: <4EFDE205.6010207@oracle.com>

Not certain, but I don't think there is an equivalent.

On 12/30/2011 7:32 AM, Guy Korland wrote:
> Thanks, I'll test it. Is there also something similar to
> -XX:+UseCMSInitiatingOccupancyOnly?

From ysr1729 at gmail.com Fri Dec 30 13:26:43 2011
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Fri, 30 Dec 2011 13:26:43 -0800
Subject: hotspot-gc-use Digest, Vol 46, Issue 9
In-Reply-To: <4EFDE205.6010207@oracle.com>
References: <4EFBA713.8080407@oracle.com> <4EFC7908.2010504@oracle.com> <4EFDE205.6010207@oracle.com>
Message-ID: 

As of some time ago, the only way to control G1 initiation was indeed via
this flag, and in fact G1 did not ergonomically kick off a collection
based on other factors (such as promotion rate) the way CMS does. This
may or may not have changed in the last two months, but someone like
John Cuthbertson or Tony Printezis would know for sure.

-- ramki

On Fri, Dec 30, 2011 at 8:08 AM, Jon Masamitsu wrote:
> Not certain, but I don't think there is an equivalent.

_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
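[Archive editor's note: the flags discussed in this thread can be combined
into launch commands as follows. This is a minimal sketch, not a
recommendation: the heap sizes, the percentage values, and `app.jar` are
illustrative placeholders, not settings suggested by the participants.]

```shell
# CMS: start a concurrent cycle when the tenured generation reaches 70%
# occupancy, and use only that threshold (UseCMSInitiatingOccupancyOnly
# disables CMS's adaptive start heuristics). Values are illustrative.
java -Xms4g -Xmx4g \
     -XX:+UseConcMarkSweepGC \
     -XX:CMSInitiatingOccupancyFraction=70 \
     -XX:+UseCMSInitiatingOccupancyOnly \
     -jar app.jar

# G1: the closest equivalent is a percentage of the ENTIRE heap, not of
# the tenured generation alone. Per this thread, there was no G1
# counterpart to UseCMSInitiatingOccupancyOnly at the time.
java -Xms4g -Xmx4g \
     -XX:+UseG1GC \
     -XX:InitiatingHeapOccupancyPercent=45 \
     -jar app.jar
```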