From Peter.Kessler at Sun.COM Thu Oct 2 12:05:47 2008 From: Peter.Kessler at Sun.COM (Peter B. Kessler) Date: Thu, 02 Oct 2008 12:05:47 -0700 Subject: Sun > Forums > Isolating a PermGen error Message-ID: <48E51B8B.6010100@Sun.COM> Transferring this thread to hotspot-gc-use (as requested) to get advice from the experts on how to diagnose class loader problems. I would start by adding -XX:+TraceClassLoading -XX:+TraceClassUnloading to the command line and seeing if there were obvious classes that get loaded but not unloaded. I don't know of any available statistics for the size of the interned String table. But it is all open source, so you could add something. (Warning: That code is ugly, and it's among the oldest code in the VM.) But likely other people have other, better ideas. ... peter > Subject: Re: Sun > Forums > Isolating a PermGen error > Date: Thu, 02 Oct 2008 21:53:37 +0300 > From: Eyal Zfira > To: Peter B. Kessler > > The PermGen usage leap is caused by running a crystal report for the 1st > time. I noticed (by analyzing a heap dump) that there's a crystal thread > keeping some objects alive. My problem is that the PermGen usage keeps > rising with every report I generate (I have an open thread with Crystal > as well). I also noticed (using JConsole) that the number of loaded classes > stays flat while the PermGen usage still rises. Is there a way to > find out whether it's an interned-string or a class problem? What is the > best way to "attack" a misused classloader problem? > > Eyal > > On Thu, Oct 2, 2008 at 9:09 PM, Peter B. Kessler > wrote: > > Thanks for the detailed log. > > You are, as you suspected, running out of permanent generation > space. That doesn't tell you why you are filling that space, but it > does tell you where to look: loaded classes and interned Strings. 
> > Except for the collection at 191.200, none of your collections frees > up any real space in the permanent generation, and that's after a > run-up from ~40MB to 64MB from 183.652 to 190.850. Can you > correlate that part of the run with things your application is > doing? (That might also be a red herring.) > > If those objects are live (that is, if the collector is doing its > job, which I think it is), then you will have to figure out why they > are being retained. That's real work. If you were expecting > classes to be unloaded, you'll have to find out why they are being > retained. Usually that turns out to be because of the way you've > split classes up among classloaders. If you are interning that many > Strings, then you'll have to increase the size of the permanent > generation, or write your own equivalent of String.intern (think > WeakHashMap) so the unique copies of the Strings are kept in the > heap, not in the permanent generation. > > ... peter > > Eyal Zfira wrote: > > Peter, > > First of all, thanks for the quick response. > You are more than welcome to publish this discussion (I'll > subscribe to the mailing list as well). > > Here is the more detailed GC log. I'm also monitoring the > application using JConsole and I keep seeing the PermGen memory > rising. .... From eyalzf at gmail.com Sun Oct 5 05:13:10 2008 From: eyalzf at gmail.com (Eyal Zfira) Date: Sun, 5 Oct 2008 14:13:10 +0200 Subject: Sun > Forums > Isolating a PermGen error Message-ID: <62909b7c0810050513q35c39628if266ae280fff90bf@mail.gmail.com> I added these parameters to see which classes are loaded and unloaded. 
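A minimal sketch of the heap-based interner Peter suggests above, for reference. This is illustrative only: the class name is invented, and it is not a drop-in replacement for String.intern().

```java
import java.lang.ref.WeakReference;
import java.util.WeakHashMap;

// Sketch of a heap-based intern table. Unlike String.intern(), the
// canonical copies live in the ordinary Java heap (not PermGen) and
// can be collected once nothing else references them.
public final class HeapInterner {
    // The key is held weakly by WeakHashMap; the value is a weak
    // reference to the same string, so the table never pins entries.
    private static final WeakHashMap<String, WeakReference<String>> TABLE =
        new WeakHashMap<String, WeakReference<String>>();

    public static synchronized String intern(String s) {
        WeakReference<String> ref = TABLE.get(s);
        String canonical = (ref == null) ? null : ref.get();
        if (canonical == null) {
            canonical = s;
            TABLE.put(s, new WeakReference<String>(s));
        }
        return canonical;
    }

    public static void main(String[] args) {
        String a = intern(new String("report"));
        String b = intern(new String("report"));
        System.out.println(a == b); // true: both share one canonical copy
    }
}
```

Because both the key and the cached reference are weak, an entry disappears once no caller still holds the canonical string, so the table cannot grow without bound and the canonical copies stay out of the permanent generation.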
Here are the command-line parameters: -Dprogram.name=run.bat -Djava.awt.headless=true -ea -Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,address=8000,suspend=n,server=y -DforceStart=false -DsaveCache=false -DdeleteMessages=false -Djavax.management.builder.initial=org.jboss.system.server.jmx.MBeanServerBuilderImpl -Djboss.platform.mbeanserver -Dcom.sun.management.jmxremote -Xloggc:c:\loggc.txt -XX:+PrintGCDetails -XX:+TraceClassLoading -XX:+TraceClassUnloading -Xms512m -Xmx512m -Djava.library.path=../lib/ -Dwrapper.key=N75jMiF5OGsUNzXK -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.pid=3272 -Dwrapper.version=3.2.3 -Dwrapper.native_library=wrapper -Dwrapper.cpu.timeout=600 -Dwrapper.jvmid=1 I see that after the Crystal classes are loaded for the first time (3-4k classes), no additional classes are loaded. But I still see the PermGen memory rising. Does that mean I must have a String.intern() problem? Thanks, Eyal -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20081005/c4ba57c3/attachment.html From michael.finocchiaro at gmail.com Mon Oct 6 06:45:39 2008 From: michael.finocchiaro at gmail.com (Michael Finocchiaro) Date: Mon, 6 Oct 2008 15:45:39 +0200 Subject: JNI implications on HotSpot Message-ID: <8b61e5430810060645t3c0c9fd2ub0de176dd3bae92d@mail.gmail.com> I was wondering if there were any thoughts, or better yet, white papers, on the performance implications of JNI code on HotSpot performance and configuration. The malloc()s in the native JNI code are allocated on the Eden heap with other Java objects and subject to the same rules? Or not? Any boundary conditions to beware of? 
Google was not particularly helpful so I am coming to the experts :) Cheers, Fino Michael Finocchiaro michael.finocchiaro at gmail.com Mobile Telephone: +33 6 85 46 07 62 MSN: le_fino at hotmail.com Blog: http://mfinocchiaro.wordpress.com Bookmarks: http://del.icio.us/michael.finocchiaro -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20081006/570d9d68/attachment.html From rainer.jung at kippdata.de Sun Oct 5 06:28:56 2008 From: rainer.jung at kippdata.de (Rainer Jung) Date: Sun, 05 Oct 2008 15:28:56 +0200 Subject: More PrintGCApplicationStoppedTime messages in Java 6 Message-ID: <48E8C118.9010302@kippdata.de> Hi, the number of messages Total time for which application threads were stopped: X.Y seconds produced by PrintGCApplicationStoppedTime increased a lot between Java 5 and 6. All of the additional messages refer to extremely short pause times, and none of them prints anything when adding PrintHeapAtGC, so it looks like they are not directly related to actually running a GC task. Can someone give an indication of what other typical reasons for those pauses exist? I tried to track it back from vm/runtime/vmThread.cpp, but wasn't really successful. 
Regards, Rainer From KDRoper at detica.com Mon Oct 6 07:16:28 2008 From: KDRoper at detica.com (Kenneth Roper) Date: Mon, 6 Oct 2008 15:16:28 +0100 Subject: JNI implications on HotSpot References: <8b61e5430810060645t3c0c9fd2ub0de176dd3bae92d@mail.gmail.com> Message-ID: I wrote a blog entry about one of the pitfalls that extensive use of JNI objects can have for your memory profile in the JVM, which may manifest itself as unexpected OutOfMemoryErrors: http://www.codingthearchitecture.com/2008/01/14/jvm_lies_the_outofmemory_myth.html It references a second article about some of the pitfalls of designing an API for use over JNI: http://www.codingthearchitecture.com/2008/01/08/the_clash_of_the_paradigms.html Hope you find them interesting. Regards Kenneth ________________________________ From: hotspot-gc-use-bounces at openjdk.java.net [mailto:hotspot-gc-use-bounces at openjdk.java.net] On Behalf Of Michael Finocchiaro Sent: 06 October 2008 14:46 To: hotspot-gc-use at openjdk.java.net Subject: JNI implications on HotSpot I was wondering if there were any thoughts, or better yet, white papers, on the performance implications of JNI code on HotSpot performance and configuration. The malloc()s in the native JNI code are allocated on the Eden heap with other Java objects and subject to the same rules? Or not? Any boundary conditions to beware of? Google was not particularly helpful so I am coming to the experts :) Cheers, Fino Michael Finocchiaro michael.finocchiaro at gmail.com Mobile Telephone: +33 6 85 46 07 62 MSN: le_fino at hotmail.com Blog: http://mfinocchiaro.wordpress.com Bookmarks: http://del.icio.us/michael.finocchiaro This message should be regarded as confidential. If you have received this email in error please notify the sender and destroy it immediately. Statements of intent shall only become binding when confirmed in hard copy by an authorised signatory. 
The contents of this email may relate to dealings with other companies within the Detica Group plc group of companies. Detica Limited is registered in England under No: 1337451. Registered offices: Surrey Research Park, Guildford, Surrey, GU2 7YP, England. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20081006/cb04a830/attachment.html From Antonios.Printezis at sun.com Mon Oct 6 07:41:36 2008 From: Antonios.Printezis at sun.com (Tony Printezis) Date: Mon, 06 Oct 2008 10:41:36 -0400 Subject: More PrintGCApplicationStoppedTime messages in Java 6 In-Reply-To: <48E8C118.9010302@kippdata.de> References: <48E8C118.9010302@kippdata.de> Message-ID: <48EA23A0.4030600@sun.com> Rainer, Which version of 5 were you using? They could be biased locking revocation safepoints. Try running with -XX:-UseBiasedLocking to see if they occur. Biased locking is an important and effective optimization in most cases and the extra safepoints that it causes are generally benign and nothing to worry about. Tony Rainer Jung wrote: > Hi, > > the number of messages > > Total time for which application threads were stopped: X.Y seconds > > produced by PrintGCApplicationStoppedTime increased a lot between Java 5 > and 6. All of the additional messages refer to extremely short pause > times, and none of the prints anything when adding PrintHeadAtGC, so it > looks like they are not directly related to actually running a GC task. > > Can someone give an indication, what other typical reasons for those > pauses exist? I tried to track it back from vm/runtime/vmThread.cpp, but > wasn't really successful. 
> > Regards, > > Rainer > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > -- ---------------------------------------------------------------------- | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | | | MS BUR02-311 | | e-mail: tony.printezis at sun.com | 35 Network Drive | | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | ---------------------------------------------------------------------- e-mail client: Thunderbird (Solaris) From Antonios.Printezis at sun.com Mon Oct 6 07:43:23 2008 From: Antonios.Printezis at sun.com (Tony Printezis) Date: Mon, 06 Oct 2008 10:43:23 -0400 Subject: JNI implications on HotSpot In-Reply-To: <8b61e5430810060645t3c0c9fd2ub0de176dd3bae92d@mail.gmail.com> References: <8b61e5430810060645t3c0c9fd2ub0de176dd3bae92d@mail.gmail.com> Message-ID: <48EA240B.6070001@sun.com> Hi. Michael Finocchiaro wrote: > I was wondering if there were any thoughts, or better yet, white > papers, on the performance implications of JNI code on HotSpot > performance and configuration. The malloc()s in the native JNI code > are allocated on the Eden heap with other Java objects and subject to > the same rules? Absolutely not. The Java heap is mmaped and we use custom memory management (GC!) and allocators for it; malloc uses a different space for allocations. Tony > Or not? Any boundary conditions to beware of? 
Google was not > particularly helpful so I am coming to the experts :) > Cheers, > Fino > > > > Michael Finocchiaro > michael.finocchiaro at gmail.com > Mobile Telephone: +33 6 85 46 07 62 > MSN: le_fino at hotmail.com > Blog: http://mfinocchiaro.wordpress.com > Bookmarks: http://del.icio.us/michael.finocchiaro > ------------------------------------------------------------------------ > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > -- ---------------------------------------------------------------------- | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | | | MS BUR02-311 | | e-mail: tony.printezis at sun.com | 35 Network Drive | | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | ---------------------------------------------------------------------- e-mail client: Thunderbird (Solaris) From michael.finocchiaro at gmail.com Mon Oct 6 08:53:44 2008 From: michael.finocchiaro at gmail.com (Michael Finocchiaro) Date: Mon, 6 Oct 2008 17:53:44 +0200 Subject: JNI implications on HotSpot In-Reply-To: <48EA240B.6070001@sun.com> References: <8b61e5430810060645t3c0c9fd2ub0de176dd3bae92d@mail.gmail.com> <48EA240B.6070001@sun.com> Message-ID: <8b61e5430810060853j50876020x3f4dd23d2dcc50@mail.gmail.com> So, is there a white paper that describes in detail the differences between memory management for Java objects and for native objects via JNI code? How does the garbage collection work for the JNI objects and thus how does one tune a JVM when the majority of code is (!groan!) C++ loaded into an application server via a war and initiated via JNI? There must be some best practices out there, or at least some description of the differences between Java object memory management and native object memory management. At least I hope there is...otherwise I suppose I'll be treading water... 
Cheers, Fino Michael Finocchiaro michael.finocchiaro at gmail.com Mobile Telephone: +33 6 85 46 07 62 MSN: le_fino at hotmail.com Blog: http://mfinocchiaro.wordpress.com Bookmarks: http://del.icio.us/michael.finocchiaro On Mon, Oct 6, 2008 at 4:43 PM, Tony Printezis wrote: > Hi. > > Michael Finocchiaro wrote: > >> I was wondering if there were any thoughts, or better yet, white papers, >> on the performance implications of JNI code on HotSpot performance and >> configuration. The malloc()s in the native JNI code are allocated on the >> Eden heap with other Java objects and subject to the same rules? >> > Absolutely not. The Java heap is mmaped and we use custom memory management > (GC!) and allocators for it; malloc uses a different space for allocations. > > Tony > >> Or not? Any boundary conditions to beware of? Google was not particularly >> helpful so I am coming to the experts :) >> Cheers, >> Fino >> >> >> >> Michael Finocchiaro >> michael.finocchiaro at gmail.com >> Mobile Telephone: +33 6 85 46 07 62 >> MSN: le_fino at hotmail.com >> Blog: http://mfinocchiaro.wordpress.com >> Bookmarks: http://del.icio.us/michael.finocchiaro >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >> >> > > -- > ---------------------------------------------------------------------- > | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | > | | MS BUR02-311 | > | e-mail: tony.printezis at sun.com | 35 Network Drive | > | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | > ---------------------------------------------------------------------- > e-mail client: Thunderbird (Solaris) > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20081006/02c001f6/attachment.html From Antonios.Printezis at sun.com Mon Oct 6 09:22:57 2008 From: Antonios.Printezis at sun.com (Tony Printezis) Date: Mon, 06 Oct 2008 12:22:57 -0400 Subject: JNI implications on HotSpot In-Reply-To: <8b61e5430810060853j50876020x3f4dd23d2dcc50@mail.gmail.com> References: <8b61e5430810060645t3c0c9fd2ub0de176dd3bae92d@mail.gmail.com> <48EA240B.6070001@sun.com> <8b61e5430810060853j50876020x3f4dd23d2dcc50@mail.gmail.com> Message-ID: <48EA3B61.3060201@sun.com> Hi, Michael Finocchiaro wrote: > So, is there a white paper that describes in detail the differences > between memory management for Java objects and for native objects via > JNI code? I don't know if there is one. But the difference is very simple. Objects allocated from Java and native malloc'ed objects are totally different, reside on different memory areas, and are managed by different mechanisms. Java objects are managed by the GC, malloc'ed objects need to be explicitly freed (like any other malloc'ed object). > How does the garbage collection work for the JNI objects To make sure we're clear: by "JNI objects" you mean "objects that have been malloc'ed from a JNI method", right? The GC doesn't know anything about them and, as a result, it doesn't manage them at all. It's up to the programmer to manage those. > and thus how does one tune a JVM when the majority of code is > (!groan!) C++ loaded into an application server via a war and > initiated via JNI? Carefully? :-) > There must be some best practices out there, or at least some > description of the differences between Java object memory management > and native object memory management. At least I hope there > is...otherwise I suppose I'll be treading water... Maybe the GC list is not the best place for JNI best practices? 
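A Java-side illustration of the split Tony describes: a direct ByteBuffer's storage lives outside the Java heap, much like a malloc'ed JNI buffer, while a heap buffer's storage is an ordinary GC-managed array. This sketch is an editorial illustration, not part of the thread:

```java
import java.nio.ByteBuffer;

public final class NativeVsHeap {
    public static void main(String[] args) {
        // Backed by a byte[] inside the GC-managed Java heap.
        ByteBuffer onHeap = ByteBuffer.allocate(1024);

        // Backed by memory outside the Java heap, comparable to a
        // malloc'ed buffer in JNI code; the GC sees only the small
        // wrapper object, not the kilobyte of native storage.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(1024);

        System.out.println(onHeap.isDirect());  // false
        System.out.println(offHeap.isDirect()); // true
    }
}
```

This is also why tools that watch only the Java heap can miss native-memory growth: the heap stays flat while the process size climbs.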
Tony > Michael Finocchiaro > michael.finocchiaro at gmail.com > Mobile Telephone: +33 6 85 46 07 62 > MSN: le_fino at hotmail.com > Blog: http://mfinocchiaro.wordpress.com > Bookmarks: http://del.icio.us/michael.finocchiaro > > > On Mon, Oct 6, 2008 at 4:43 PM, Tony Printezis > > wrote: > > Hi. > > > Michael Finocchiaro wrote: > > I was wondering if there were any thoughts, or better yet, > white papers, on the performance implications of JNI code on > HotSpot performance and configuration. The malloc()s in the > native JNI code are allocated on the Eden heap with other Java > objects and subject to the same rules? > > Absolutely not. The Java heap is mmaped and we use custom memory > management (GC!) and allocators for it; malloc uses a different > space for allocations. > > Tony > > Or not? Any boundary conditions to beware of? Google was not > particularly helpful so I am coming to the experts :) > Cheers, > Fino > > > > Michael Finocchiaro > michael.finocchiaro at gmail.com > > > > > Mobile Telephone: +33 6 85 46 07 62 > MSN: le_fino at hotmail.com > > > > Blog: http://mfinocchiaro.wordpress.com > Bookmarks: http://del.icio.us/michael.finocchiaro > ------------------------------------------------------------------------ > > _______________________________________________ > hotspot-gc-use mailing list > > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > -- > ---------------------------------------------------------------------- > | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | > | | MS BUR02-311 | > | e-mail: tony.printezis at sun.com > | 35 Network Drive | > | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | > ---------------------------------------------------------------------- > e-mail client: Thunderbird (Solaris) > > > -- ---------------------------------------------------------------------- | Tony Printezis, Staff Engineer | Sun Microsystems Inc. 
| | | MS BUR02-311 | | e-mail: tony.printezis at sun.com | 35 Network Drive | | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | ---------------------------------------------------------------------- e-mail client: Thunderbird (Solaris) From michael.finocchiaro at gmail.com Mon Oct 6 09:33:31 2008 From: michael.finocchiaro at gmail.com (Michael Finocchiaro) Date: Mon, 6 Oct 2008 18:33:31 +0200 Subject: JNI implications on HotSpot In-Reply-To: <48EA3B61.3060201@sun.com> References: <8b61e5430810060645t3c0c9fd2ub0de176dd3bae92d@mail.gmail.com> <48EA240B.6070001@sun.com> <8b61e5430810060853j50876020x3f4dd23d2dcc50@mail.gmail.com> <48EA3B61.3060201@sun.com> Message-ID: <8b61e5430810060933n4e784b03q1f434d24dd10c39c@mail.gmail.com> OK thanks. I stand corrected and withdraw further questions about JNI from this list. I appreciate all the timely and detailed responses - thanks everyone! Fino Michael Finocchiaro michael.finocchiaro at gmail.com Mobile Telephone: +33 6 85 46 07 62 MSN: le_fino at hotmail.com Blog: http://mfinocchiaro.wordpress.com Bookmarks: http://del.icio.us/michael.finocchiaro On Mon, Oct 6, 2008 at 6:22 PM, Tony Printezis wrote: > Hi, > > Michael Finocchiaro wrote: > >> So, is there a white paper that describes in detail the differences >> between memory management for Java objects and for native objects via JNI >> code? >> > I don't know if there is one. But the difference is very simple. Objects > allocated from Java and native malloc'ed objects are totally different, > reside on different memory areas, and are managed by different mechanisms. > Java objects are managed by the GC, malloc'ed objects need to be explicitly > freed (like any other malloc'ed object). > >> How does the garbage collection work for the JNI objects >> > To make sure we're clear: by "JNI objects" you mean "objects that have been > malloc'ed from a JNI method", right? The GC doesn't know anything about them > and, as a result, it doesn't manage them at all. 
It's up to the programmer > to manage those. > >> and thus how does one tune a JVM when the majority of code is (!groan!) >> C++ loaded into an application server via a war and initiated via JNI? >> > Carefully? :-) > >> There must be some best practices out there, or at least some description >> of the differences between Java object memory management and native object >> memory management. At least I hope there is...otherwise I suppose I'll be >> treading water... >> > Maybe the GC list is not the best place for JNI best practices? > > Tony > >> Michael Finocchiaro >> michael.finocchiaro at gmail.com >> Mobile Telephone: +33 6 85 46 07 62 >> MSN: le_fino at hotmail.com >> Blog: http://mfinocchiaro.wordpress.com >> Bookmarks: http://del.icio.us/michael.finocchiaro >> >> >> On Mon, Oct 6, 2008 at 4:43 PM, Tony Printezis < >> Antonios.Printezis at sun.com > wrote: >> >> Hi. >> >> >> Michael Finocchiaro wrote: >> >> I was wondering if there were any thoughts, or better yet, >> white papers, on the performance implications of JNI code on >> HotSpot performance and configuration. The malloc()s in the >> native JNI code are allocated on the Eden heap with other Java >> objects and subject to the same rules? >> >> Absolutely not. The Java heap is mmaped and we use custom memory >> management (GC!) and allocators for it; malloc uses a different >> space for allocations. >> >> Tony >> >> Or not? Any boundary conditions to beware of? 
Google was not >> particularly helpful so I am coming to the experts :) >> Cheers, >> Fino >> >> >> >> Michael Finocchiaro >> michael.finocchiaro at gmail.com >> >> > > >> >> Mobile Telephone: +33 6 85 46 07 62 >> MSN: le_fino at hotmail.com >> > >> >> Blog: http://mfinocchiaro.wordpress.com >> Bookmarks: http://del.icio.us/michael.finocchiaro >> >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> hotspot-gc-use mailing list >> >> hotspot-gc-use at openjdk.java.net >> >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >> >> >> -- >> ---------------------------------------------------------------------- >> | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | >> | | MS BUR02-311 | >> | e-mail: tony.printezis at sun.com >> | 35 Network Drive | >> | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | >> ---------------------------------------------------------------------- >> e-mail client: Thunderbird (Solaris) >> >> >> >> > -- > ---------------------------------------------------------------------- > | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | > | | MS BUR02-311 | > | e-mail: tony.printezis at sun.com | 35 Network Drive | > | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | > ---------------------------------------------------------------------- > e-mail client: Thunderbird (Solaris) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20081006/d4224c8b/attachment.html From aaisinzon at guidewire.com Mon Oct 6 17:53:26 2008 From: aaisinzon at guidewire.com (Alex Aisinzon) Date: Mon, 6 Oct 2008 17:53:26 -0700 Subject: Nursery and Tenured generation sizing Message-ID: <545E8962B7529546962672A564039F990F75B8D5@exchange.guidewire.com> All Our application is very memory intensive and behaves well with a lot of memory. 
3GB is close to the 32-bit limit on our platform. In these tests, the amount of long-lived data is below 1GB. I am running various performance tests and I am not seeing a definite advantage in using as large a nursery as possible, even though this is one of the tenets of the generational garbage collector. For example, a performance run with 1GB tenured and 2GB nursery does not perform better than one with 2GB tenured and 1GB nursery, even though, in both cases, there are very few full collections. I have been using ParNewGC and will try again with ParallelGC. This logic has worked very well on another JVM and I see no reason why it would not work equally well with the Sun JDK 1.5. Let me know what you think Alex Aisinzon -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20081006/77ff89f9/attachment.html From Jon.Masamitsu at Sun.COM Mon Oct 6 20:35:21 2008 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Mon, 06 Oct 2008 20:35:21 -0700 Subject: Nursery and Tenured generation sizing In-Reply-To: <545E8962B7529546962672A564039F990F75B8D5@exchange.guidewire.com> References: <545E8962B7529546962672A564039F990F75B8D5@exchange.guidewire.com> Message-ID: <48EAD8F9.8000707@sun.com> What's your command line? Alex Aisinzon wrote: > > All > > > > Our application is very memory intensive and behaves well with a lot > of memory. 3GB is close to the 32 bits limit on our platform. > > In these tests, the amount of long lived is below 1GB. > > I running various performance tests and I am not seeing some definite > advantage in using an as large as possible nursery even though this is > one of the tenet of the generational garbage collector. > > By example, a performance run with 1GB tenured and 2GB nursery is not > better performing than one with 2GB tenured and 1GB nursery even > though, in both cases, there are very few full collections. 
> > I have been using ParNewGC and will try again with ParallelGC. > > This logic has worked very well on another JVM and I see no reason why > it would not work equally well with the Sun JDK 1.5. > > > > Let me know what you think > > > > Alex Aisinzon > > ------------------------------------------------------------------------ > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > From rainer.jung at kippdata.de Sun Oct 12 03:58:42 2008 From: rainer.jung at kippdata.de (Rainer Jung) Date: Sun, 12 Oct 2008 12:58:42 +0200 Subject: More PrintGCApplicationStoppedTime messages in Java 6 In-Reply-To: <48EA23A0.4030600@sun.com> References: <48E8C118.9010302@kippdata.de> <48EA23A0.4030600@sun.com> Message-ID: <48F1D862.4030201@kippdata.de> Tony Printezis schrieb: > Rainer, > > Which version of 5 were you using? They could be biased locking > revocation safepoints. Try running with -XX:-UseBiasedLocking to see if > they occur. Biased locking is an important and effective optimization in > most cases and the extra safepoints that it causes are generally benign > and nothing to worry about. Great, that's it. When turning it off, I don't get the additional stop times any more. I used Tomcat startup as a simple test case. I understand that in general it's not a good idea to turn it off; I just wanted to understand where they come from. I used -XX:+TraceBiasedLocking with 1.6.0_07 to look further at those stop events:

Count  Message type
   54  Aligned thread 0xHEX to 0xHEX
  265  Total time for which application threads were stopped: NUM seconds
  265  Application time: NUM seconds
  252  Revoking bias with potentially per-thread safepoint:
   70  Revoking bias by walking my own stack:
    1  Revoking bias with global safepoint:
  323  Revoking bias of object ...
  243  Revoked bias of currently-unlocked object
   59  Revoked bias of currently-locked object
   21  Revoked bias of object biased toward dead thread
    4  * Beginning bulk revocation (kind == rebias) ...
    3  * Beginning bulk revocation (kind == revoke) ...
    7  * Ending bulk revocation
    2  (Skipping revocation of object of type ... because it's no longer biased)
    2  * Disabling biased locking for type ...
    4  Rebiased object toward thread 0xHEX

Of the 265 stop messages, 6 were triggered by Minor GC; the rest belong to BiasedLocking (all very short). Thanks for the explanation! Regards, Rainer > Tony > > Rainer Jung wrote: >> Hi, >> >> the number of messages >> >> Total time for which application threads were stopped: X.Y seconds >> >> produced by PrintGCApplicationStoppedTime increased a lot between Java 5 >> and 6. All of the additional messages refer to extremely short pause >> times, and none of the prints anything when adding PrintHeadAtGC, so it >> looks like they are not directly related to actually running a GC task. >> >> Can someone give an indication, what other typical reasons for those >> pauses exist? I tried to track it back from vm/runtime/vmThread.cpp, but >> wasn't really successful. >> >> Regards, >> >> Rainer From david.tavoularis at mycom-int.com Mon Oct 13 06:37:13 2008 From: david.tavoularis at mycom-int.com (David Tavoularis) Date: Mon, 13 Oct 2008 15:37:13 +0200 Subject: Java6u7 : 2 very long Parallel GC (24&30min) without any specific reason Message-ID: Hi, We had 2 very long Full GC pauses (Parallel GC) running with Java 6u7 (64-bit Solaris): the GCs took 24min and then 30min. Usually, a Full GC would hardly take more than 2 minutes. We also noticed that the "real" measurement usually comes close to the sum of the "user" and "sys" measurements, but here it far exceeds them; see: user=47.85 sys=9.84, real=1450.70 secs Any idea what to check from here? 
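A quick check of the numbers above makes the anomaly concrete: the collector's threads were on CPU for only about 4% of the wall-clock pause, so roughly 96% of it was spent waiting, which usually points at paging. (The class and method names below are invented for illustration.)

```java
// Sanity-check the [Times: user=47.85 sys=9.84, real=1450.70 secs] entry.
public final class PauseCheck {
    // Fraction of the wall-clock pause the GC actually spent on CPU.
    public static double cpuFraction(double user, double sys, double real) {
        return (user + sys) / real;
    }

    public static void main(String[] args) {
        double f = cpuFraction(47.85, 9.84, 1450.70);
        System.out.printf("on CPU for %.1f%% of the pause%n", f * 100.0);
    }
}
```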
As a workaround, I would like to implement the property "-XX:MaxGCPauseMillis=60000" (10min max for a Full GC). Do you think it is a good idea ? Thanks in advance -- David [...] {Heap before GC invocations=1707 (full 39): PSYoungGen total 3757056K, used 1158717K [0xfffffffe7f000000, 0xffffffff78800000, 0xffffffff78800000) eden space 3436160K, 32% used [0xfffffffe7f000000,0xfffffffec238f468,0xffffffff50ba0000) from space 320896K, 17% used [0xffffffff50ba0000,0xffffffff543a0000,0xffffffff64500000) to space 308800K, 0% used [0xffffffff65a70000,0xffffffff65a70000,0xffffffff78800000) PSOldGen total 16396288K, used 4817606K [0xfffffffa96400000, 0xfffffffe7f000000, 0xfffffffe7f000000) object space 16396288K, 29% used [0xfffffffa96400000,0xfffffffbbc4b1b58,0xfffffffe7f000000) PSPermGen total 94208K, used 73744K [0xfffffffa76400000, 0xfffffffa7c000000, 0xfffffffa96400000) object space 94208K, 78% used [0xfffffffa76400000,0xfffffffa7ac04308,0xfffffffa7c000000) 37586.523: [Full GC (System) [PSYoungGen: 1158717K->0K(3757056K)] [PSOldGen: 4817606K->4825556K(16396288K)] 5976323K->4825556K(20153344K) [PSPermGen: 73744K->73744K(94208K)], 1450.6814969 secs] [Times: user=47.85 sys=9.84, real=1450.70 secs] Heap after GC invocations=1707 (full 39): PSYoungGen total 3757056K, used 0K [0xfffffffe7f000000, 0xffffffff78800000, 0xffffffff78800000) eden space 3436160K, 0% used [0xfffffffe7f000000,0xfffffffe7f000000,0xffffffff50ba0000) from space 320896K, 0% used [0xffffffff50ba0000,0xffffffff50ba0000,0xffffffff64500000) to space 308800K, 0% used [0xffffffff65a70000,0xffffffff65a70000,0xffffffff78800000) PSOldGen total 16396288K, used 4825556K [0xfffffffa96400000, 0xfffffffe7f000000, 0xfffffffe7f000000) object space 16396288K, 29% used [0xfffffffa96400000,0xfffffffbbcc75320,0xfffffffe7f000000) PSPermGen total 94208K, used 73744K [0xfffffffa76400000, 0xfffffffa7c000000, 0xfffffffa96400000) object space 94208K, 78% used [0xfffffffa76400000,0xfffffffa7ac04308,0xfffffffa7c000000) } Total time 
for which application threads were stopped: 1450.8533550 seconds
[...]
-------------
[...]
Total time for which application threads were stopped: 0.0008307 seconds
Total time for which application threads were stopped: 0.0402369 seconds
{Heap before GC invocations=1731 (full 40):
 PSYoungGen      total 2841920K, used 795145K [0xfffffffe7f000000, 0xffffffff78800000, 0xffffffff78800000)
  eden space 1626624K, 2% used [0xfffffffe7f000000,0xfffffffe816825c0,0xfffffffee2480000)
  from space 1215296K, 62% used [0xffffffff2e530000,0xffffffff5c730000,0xffffffff78800000)
  to   space 1230592K, 0% used [0xfffffffee2480000,0xfffffffee2480000,0xffffffff2d640000)
 PSOldGen        total 16396288K, used 9942183K [0xfffffffa96400000, 0xfffffffe7f000000, 0xfffffffe7f000000)
  object space 16396288K, 60% used [0xfffffffa96400000,0xfffffffcf5129db8,0xfffffffe7f000000)
 PSPermGen       total 94208K, used 93469K [0xfffffffa76400000, 0xfffffffa7c000000, 0xfffffffa96400000)
  object space 94208K, 99% used [0xfffffffa76400000,0xfffffffa7bf47660,0xfffffffa7c000000)
42393.882: [Full GC (System) [PSYoungGen: 795145K->0K(2841920K)] [PSOldGen: 9942183K->9134638K(16396288K)] 10737328K->9134638K(19238208K) [PSPermGen: 93469K->70889K(98304K)], 1771.4421419 secs] [Times: user=120.88 sys=15.58, real=1771.45 secs]
Heap after GC invocations=1731 (full 40):
 PSYoungGen      total 2841920K, used 0K [0xfffffffe7f000000, 0xffffffff78800000, 0xffffffff78800000)
  eden space 1626624K, 0% used [0xfffffffe7f000000,0xfffffffe7f000000,0xfffffffee2480000)
  from space 1215296K, 0% used [0xffffffff2e530000,0xffffffff2e530000,0xffffffff78800000)
  to   space 1230592K, 0% used [0xfffffffee2480000,0xfffffffee2480000,0xffffffff2d640000)
 PSOldGen        total 16396288K, used 9134638K [0xfffffffa96400000, 0xfffffffe7f000000, 0xfffffffe7f000000)
  object space 16396288K, 55% used [0xfffffffa96400000,0xfffffffcc3c8b928,0xfffffffe7f000000)
 PSPermGen       total 98304K, used 70889K [0xfffffffa76400000, 0xfffffffa7c400000, 0xfffffffa96400000)
  object space 98304K, 72% used
[0xfffffffa76400000,0xfffffffa7a93a5e8,0xfffffffa7c400000)
}
Total time for which application threads were stopped: 1771.5992402 seconds
Total time for which application threads were stopped: 0.0666503 seconds
Total time for which application threads were stopped: 0.0978373 seconds
[...]
--------------

GC logs available at: http://dl.free.fr/qKoCs5q9F

From Jon.Masamitsu at Sun.COM Mon Oct 13 08:46:16 2008
From: Jon.Masamitsu at Sun.COM (Jon Masamitsu)
Date: Mon, 13 Oct 2008 08:46:16 -0700
Subject: Java6u7 : 2 very long Parallel GC (24&30min) without any specific reason
In-Reply-To: 
References: 
Message-ID: <48F36D48.3050907@Sun.COM>

David,

Are you setting the maximum heap size on your command line? If not, you may be getting a larger maximum by default. That value has been changed on 64-bit platforms. It used to be 1g (which was ok for 32-bit platforms but makes less sense for 64-bit platforms). Try -Xmx1g, which will give you the old behavior.

On 10/13/08 06:37, David Tavoularis wrote:
> Hi,
>
> We had 2 very long Full GC pauses (Parallel GC) running Java 6u7 (64-bit Solaris): the GC took 24min and then 30min.
> Usually, a Full GC would hardly take more than 2 minutes.
>
> We also noticed that the "real" measurement usually comes close to the sum of the "user" and "sys" measurements, but here it far exceeds them; see:
> user=47.85 sys=9.84, real=1450.70 secs

This says that the VM is doing lots of waiting. Do you have more than 1 VM running on this machine? Or other applications that are using lots of physical memory? The much larger real time often indicates swapping.

>
> Any idea what to check from here?
>
> As a workaround, I would like to implement the property "-XX:MaxGCPauseMillis=600000" (10min max for a Full GC). Do you think it is a good idea?
>

Using this setting should limit the growth of the heap so that you stay below the 10min pause. You should be able to see the size of the heap in the GC logs (young gen + old (tenured) gen).
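[The real-vs-(user+sys) check described above can be applied mechanically to a whole GC log. Below is a minimal sketch — a hypothetical helper, not a JDK tool — assuming the `[Times: user=... sys=..., real=... secs]` format shown in the log above:]

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Flags GC log entries where wall-clock ("real") time far exceeds CPU time
// (user + sys). On an otherwise healthy box this usually means the GC
// threads were waiting on something other than CPU, typically paging.
public class StoppedTimeCheck {
    private static final Pattern TIMES = Pattern.compile(
        "\\[Times: user=([0-9.]+) sys=([0-9.]+), real=([0-9.]+) secs\\]");

    // Returns true if real > factor * (user + sys); factor is a judgment call.
    static boolean looksLikeSwapping(String gcLogLine, double factor) {
        Matcher m = TIMES.matcher(gcLogLine);
        if (!m.find()) {
            return false;                  // no [Times: ...] tail on this line
        }
        double user = Double.parseDouble(m.group(1));
        double sys  = Double.parseDouble(m.group(2));
        double real = Double.parseDouble(m.group(3));
        return real > factor * (user + sys);
    }

    public static void main(String[] args) {
        // The two Full GCs from the log above.
        String gc1 = "[Times: user=47.85 sys=9.84, real=1450.70 secs]";
        String gc2 = "[Times: user=120.88 sys=15.58, real=1771.45 secs]";
        // A normal parallel collection: real is close to (or below) user+sys.
        String ok  = "[Times: user=1.20 sys=0.04, real=0.31 secs]";
        System.out.println(looksLikeSwapping(gc1, 2.0));  // true
        System.out.println(looksLikeSwapping(gc2, 2.0));  // true
        System.out.println(looksLikeSwapping(ok, 2.0));   // false
    }
}
```

[Run against the log above, both Full GCs trip the check while the ordinary stopped-time entries do not, which is consistent with the swapping diagnosis.]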
> Thanks in advance
>

From david.tavoularis at mycom-int.com Mon Oct 13 11:31:28 2008
From: david.tavoularis at mycom-int.com (David Tavoularis)
Date: Mon, 13 Oct 2008 20:31:28 +0200
Subject: Java6u7 : 2 very long Parallel GC (24&30min) without any specific reason
In-Reply-To: <48F36D48.3050907@Sun.COM>
References: <48F36D48.3050907@Sun.COM>
Message-ID: 

Hi Jon,

Yes, I set the maximum heap size on the command line. Please note that we are using ms=mx=20GB on a 64GB server.

/xxx/jdk1.6.0_07/bin/sparcv9/java -server -Xms20000m -Xmx20000m -cp '' -Djava.ext.dirs=/opt/xxxxx/jar -Dsun.rmi.transport.tcp.handshakeTimeout=480000 -Dsun.rmi.dgc.client.gcInterval=900000 -Dsun.rmi.dgc.server.gcInterval=900000 -XX:NewSize=3990m -XX:MaxNewSize=3990m -XX:+UseParallelGC -XX:+AggressiveHeap -XX:MaxPermSize=512m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/opt/xxxxx/logs/gc_20081013111157.log com.my.class

> This says that the VM is doing lots of waiting. Do you have
> more than 1 VM running on this machine? Or other applications
> that are using lots of physical memory? The much larger
> real time often indicates swapping.

There are only 2 other JVMs on the server (using 22GB & 13GB), so the total allocated memory is 20+22+13=55GB; there should be no swapping. But I will check whether there is any defective RAM on the server.

> Using this setting should limit the growth of the heap
> so that you stay below the 10min pause. You should be
> able to see the size of the heap in the GC logs
> (young gen + old (tenured) gen).

I do not understand: does it mean that the ms and mx values from the command line will not be taken into account? Here ms=mx, so I do not expect any change in the heap size. We also defined new=4GB on the command line, so old=16GB (approximately), according to the GC logs.
--
David

On Mon, 13 Oct 2008 17:46:16 +0200, Jon Masamitsu wrote:

> David,
>
> Are you setting the maximum heap size on your command line?
> If not, you may be getting a larger maximum by default. That
> value has been changed on 64-bit platforms. It used to be 1g
> (which was ok for 32-bit platforms but makes less
> sense for 64-bit platforms). Try -Xmx1g, which will give you
> the old behavior.
>
> On 10/13/08 06:37, David Tavoularis wrote:
>> Hi,
>>
>> We had 2 very long Full GC pauses (Parallel GC) running Java 6u7 (64-bit Solaris): the GC took 24min and then 30min.
>> Usually, a Full GC would hardly take more than 2 minutes.
>>
>> We also noticed that the "real" measurement usually comes close to the sum of the "user" and "sys" measurements, but here it far exceeds them; see:
>> user=47.85 sys=9.84, real=1450.70 secs
>
> This says that the VM is doing lots of waiting. Do you have
> more than 1 VM running on this machine? Or other applications
> that are using lots of physical memory? The much larger
> real time often indicates swapping.
>
>>
>> Any idea what to check from here?
>>
>> As a workaround, I would like to implement the property "-XX:MaxGCPauseMillis=600000" (10min max for a Full GC). Do you think it is a good idea?
>>
>
> Using this setting should limit the growth of the heap
> so that you stay below the 10min pause. You should be
> able to see the size of the heap in the GC logs
> (young gen + old (tenured) gen).
>
>
>> Thanks in advance
>>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>

From Jon.Masamitsu at Sun.COM Mon Oct 13 11:39:17 2008
From: Jon.Masamitsu at Sun.COM (Jon Masamitsu)
Date: Mon, 13 Oct 2008 11:39:17 -0700
Subject: Java6u7 : 2 very long Parallel GC (24&30min) without any specific reason
In-Reply-To: 
References: <48F36D48.3050907@Sun.COM>
Message-ID: <48F395D5.8080802@Sun.COM>

On 10/13/08 11:31, David Tavoularis wrote:
...
>> Using this setting should limit the growth of the heap
>> so that you stay below the 10min pause. You should be
>> able to see the size of the heap in the GC logs
>> (young gen + old (tenured) gen).
> I do not understand: does it mean that the ms and mx values from the command line will not be taken into account?
> Here ms=mx, so I do not expect any change in the heap size.
> We also defined new=4GB on the command line, so old=16GB (approximately), according to the GC logs.
>

If you set ms=mx, then there should be no effect from setting the pause time goal. The VM will try to grow/shrink the heap size within the bounds of ms and mx in order to meet the pause time goal. Since you have ms=mx, the VM does not have anything to adjust.

From shane.cox at gmail.com Fri Oct 24 13:16:25 2008
From: shane.cox at gmail.com (Shane Cox)
Date: Fri, 24 Oct 2008 16:16:25 -0400
Subject: does CMS collector ever compact?
Message-ID: 

Periodically our application encounters promotion failures when running the CMS collector, presumably due to a fragmented Tenured space. Once the first failure occurs, we tend to see subsequent failures at lower occupancies of Tenured space. For example, the first promotion failure might occur when Tenured is 70% full, the next 68%, then 65%, ... you get the picture. So my question is whether a compaction will ever be performed to resolve the fragmentation?
I'm not a programmer, but I see comments in concurrentMarkSweepGeneration.cpp that lead me to believe that a compaction would happen if UseCMSCompactAtFullCollection is set to TRUE and the threshold set by CMSFullGCsBeforeCompaction has been exceeded. However, since the defaults are TRUE and 0 respectively, I would think that the first Full GC triggered by a promotion failure would perform a compaction. Apparently I'm missing something.

Comments from concurrentMarkSweepGeneration.cpp:
// Normally, we'll compact only if the UseCMSCompactAtFullCollection
// flag is set, and we have either requested a System.gc() or
// the number of full gc's since the last concurrent cycle
// has exceeded the threshold set by CMSFullGCsBeforeCompaction,
// or if an incremental collection has failed

Any clarification in this matter would be appreciated.

FYI, normally we are able to avoid promotion failures by setting CMSInitiatingOccupancyFraction to an aggressive number such as 50, though this comes at the cost of a much larger heap and slower minor collections.

Thanks,
Shane
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20081024/d55eda57/attachment.html

From Jon.Masamitsu at Sun.COM Fri Oct 24 13:43:55 2008
From: Jon.Masamitsu at Sun.COM (Jon Masamitsu)
Date: Fri, 24 Oct 2008 13:43:55 -0700
Subject: does CMS collector ever compact?
In-Reply-To: 
References: 
Message-ID: <4902338B.3010602@sun.com>

If a promotion failure occurs, the heap is compacted with the default settings for UseCMSCompactAtFullCollection and CMSFullGCsBeforeCompaction.

What are all the command line flags that you use?

Shane Cox wrote on 10/24/08 13:16:

> Periodically our application encounters promotion failures when
> running the CMS collector, presumably due to a fragmented Tenured
> space. Once the first failure occurs, we tend to see subsequent
> failures at lower occupancies of Tenured space.
> For example, the first promotion failure might occur when Tenured is
> 70% full, the next 68%, then 65%, ... you get the picture. So my
> question is whether a compaction will ever be performed to resolve
> the fragmentation?
>
> I'm not a programmer, but I see comments in
> concurrentMarkSweepGeneration.cpp that lead me to believe that a
> compaction would happen if UseCMSCompactAtFullCollection is set to TRUE
> and the threshold set by CMSFullGCsBeforeCompaction has been
> exceeded. However, since the defaults are TRUE and 0 respectively, I
> would think that the first Full GC triggered by a promotion failure
> would perform a compaction. Apparently I'm missing something.
>
> Comments from concurrentMarkSweepGeneration.cpp:
> // Normally, we'll compact only if the UseCMSCompactAtFullCollection
> // flag is set, and we have either requested a System.gc() or
> // the number of full gc's since the last concurrent cycle
> // has exceeded the threshold set by CMSFullGCsBeforeCompaction,
> // or if an incremental collection has failed
>
> Any clarification in this matter would be appreciated.
>
> FYI, normally we are able to avoid promotion failures by setting
> CMSInitiatingOccupancyFraction to an aggressive number such as 50, though this
> comes at the cost of a much larger heap and slower minor collections.
>
> Thanks,
> Shane
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

From Y.S.Ramakrishna at Sun.COM Fri Oct 24 14:11:00 2008
From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna)
Date: Fri, 24 Oct 2008 14:11:00 -0700
Subject: does CMS collector ever compact?
In-Reply-To: 
References: 
Message-ID: 

Hi Shane --

> Periodically our application encounters promotion failures when
> running the CMS collector, presumably due to a fragmented Tenured space.
> Once the first failure occurs, we tend to see subsequent failures at
> lower occupancies of Tenured space. For example, the first promotion
> failure might occur when Tenured is 70% full, the next 68%, then 65%,
> ... you get the picture. So my question is whether a compaction will
> ever be performed to resolve the fragmentation?

It is interesting that the onset of each subsequent promotion failure happens at a lower occupancy than the previous. I do not believe I have seen that behaviour before, and do not have any conjectures as to why that could be happening. Indeed I would normally have expected it to be larger (higher occupancy) after the first and then to remain stable at that value thereafter. Perhaps you can share some gc logs showing this, if possible? (Email the logs offline to one of us, if so.) Sometimes, looking at entire gc logs as a whole fires the right cerebral neurons.

We are actively working on improving some of our heuristics related to controlling fragmentation (under CR 6631166); stay tuned for some improvements in that area soon.

-- ramki

> I'm not a programmer, but I see comments in
> concurrentMarkSweepGeneration.cpp that lead me to believe that a
> compaction would happen if UseCMSCompactAtFullCollection is set to TRUE
> and the threshold set by CMSFullGCsBeforeCompaction has been exceeded.
> However, since the defaults are TRUE and 0 respectively, I would think
> that the first Full GC triggered by a promotion failure would perform
> a compaction. Apparently I'm missing something.
>
> Comments from concurrentMarkSweepGeneration.cpp:
> // Normally, we'll compact only if the UseCMSCompactAtFullCollection
> // flag is set, and we have either requested a System.gc() or
> // the number of full gc's since the last concurrent cycle
> // has exceeded the threshold set by CMSFullGCsBeforeCompaction,
> // or if an incremental collection has failed
>
> Any clarification in this matter would be appreciated.
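[The decision described in the quoted HotSpot comment reduces to a small predicate. The sketch below is an illustrative paraphrase in Java, not the actual HotSpot source (which is C++ in concurrentMarkSweepGeneration.cpp); the defaults used are the ones cited in this thread:]

```java
// Paraphrase of the compaction decision quoted from
// concurrentMarkSweepGeneration.cpp -- illustrative only.
public class CompactionPolicySketch {
    static boolean useCMSCompactAtFullCollection = true; // default cited above
    static int cmsFullGCsBeforeCompaction = 0;           // default cited above

    // Compact if the flag is set AND (a System.gc() was requested, OR the
    // number of full GCs since the last concurrent cycle has exceeded the
    // threshold, OR an incremental (young) collection has failed -- which
    // is what a promotion failure is).
    static boolean shouldCompact(boolean systemGcRequested,
                                 int fullGcsSinceLastConcurrentCycle,
                                 boolean incrementalCollectionFailed) {
        return useCMSCompactAtFullCollection
            && (systemGcRequested
                || fullGcsSinceLastConcurrentCycle > cmsFullGCsBeforeCompaction
                || incrementalCollectionFailed);
    }

    public static void main(String[] args) {
        // A promotion failure counts as a failed incremental collection,
        // so with default flags the resulting full collection compacts:
        System.out.println(shouldCompact(false, 0, true));   // true
        // A full GC with no trigger condition would not force compaction:
        System.out.println(shouldCompact(false, 0, false));  // false
    }
}
```

[With the defaults, the `incrementalCollectionFailed` branch is the one that matters for this thread: it is why Jon says a full collection after a promotion failure does compact the heap.]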
>
> FYI, normally we are able to avoid promotion failures by setting
> CMSInitiatingOccupancyFraction to an aggressive number such as 50, though this comes
> at the cost of a much larger heap and slower minor collections.
>
> Thanks,
> Shane
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

From Antonios.Printezis at sun.com Mon Oct 27 08:59:02 2008
From: Antonios.Printezis at sun.com (Tony Printezis)
Date: Mon, 27 Oct 2008 11:59:02 -0400
Subject: does CMS collector ever compact?
In-Reply-To: 
References: 
Message-ID: <4905E546.5090800@sun.com>

(this is a wild shot in the dark!) Does the application have an array-based data structure (ArrayList, HashMap, etc.) that keeps growing? Usually such data structures double the size of the associated array every time they need to grow it. The larger the array is, the earlier (maybe!) it will cause fragmentation.

Tony

Y Srinivas Ramakrishna wrote:
> Hi Shane --
>
>> Periodically our application encounters promotion failures when
>> running the CMS collector, presumably due to a fragmented Tenured
>> space. Once the first failure occurs, we tend to see subsequent
>> failures at lower occupancies of Tenured space. For example, the
>> first promotion failure might occur when Tenured is 70% full, the
>> next 68%, then 65%, ... you get the picture. So my question is
>> whether a compaction will ever be performed to resolve the
>> fragmentation?
>
> It is interesting that the onset of each subsequent promotion failure
> happens at a lower occupancy than the previous. I do not believe I have
> seen that behaviour before, and do not have any conjectures as to why
> that could be happening. Indeed I would normally have expected it to
> be larger (higher occupancy) after the first and then to remain stable
> at that value thereafter. Perhaps you can share some gc logs showing this,
> if possible?
> (Email the logs offline to one of us, if so.) Sometimes,
> looking at entire gc logs as a whole fires the right cerebral neurons.
>
> We are actively working on improving some of our heuristics
> related to controlling fragmentation (under CR 6631166); stay tuned
> for some improvements in that area soon.
>
> -- ramki
>
>> I'm not a programmer, but I see comments in
>> concurrentMarkSweepGeneration.cpp that lead me to believe that a compaction
>> would happen if UseCMSCompactAtFullCollection is set to TRUE and the
>> threshold set by CMSFullGCsBeforeCompaction has been exceeded. However,
>> since the defaults are TRUE and 0 respectively, I would think that the
>> first Full GC triggered by a promotion failure would perform a compaction.
>> Apparently I'm missing something.
>>
>> Comments from concurrentMarkSweepGeneration.cpp:
>> // Normally, we'll compact only if the UseCMSCompactAtFullCollection
>> // flag is set, and we have either requested a System.gc() or
>> // the number of full gc's since the last concurrent cycle
>> // has exceeded the threshold set by CMSFullGCsBeforeCompaction,
>> // or if an incremental collection has failed
>>
>> Any clarification in this matter would be appreciated.
>>
>> FYI, normally we are able to avoid promotion failures by setting
>> CMSInitiatingOccupancyFraction to an aggressive number such as 50, though this comes
>> at the cost of a much larger heap and slower minor collections.
>>
>> Thanks,
>> Shane
>> _______________________________________________
>> hotspot-gc-use mailing list
>> hotspot-gc-use at openjdk.java.net
>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>

--
----------------------------------------------------------------------
| Tony Printezis, Staff Engineer   | Sun Microsystems Inc.           |
|                                  | MS BUR02-311                    |
| e-mail: tony.printezis at sun.com   | 35 Network Drive                |
| office: +1 781 442 0998 (x20998) | Burlington, MA 01803-0902, USA  |
----------------------------------------------------------------------
e-mail client: Thunderbird (Solaris)

From Sujit.Das at cognizant.com Tue Oct 28 09:01:55 2008
From: Sujit.Das at cognizant.com (Sujit.Das at cognizant.com)
Date: Tue, 28 Oct 2008 21:31:55 +0530
Subject: ParallelGCThreads
Message-ID: <19B27FD5AF2EAA49A66F787911CF51952E782D@CTSINCHNSXUU.cts.com>

We have a distributed trading application. I have a processor set of 16 CPUs bound to process A and 8 CPUs bound to process B. Please consider the following scenarios:

Scenario 1: ParallelGCThreads set to 16 for both processes A and B.
Scenario 2: ParallelGCThreads set to 8 for both processes A and B.

Now for Scenario 2, I see that thread stop time has increased (got worse) for process A and decreased (improved) for process B. Is this expected behaviour? Also, will this (# of ParallelGCThreads vs # of CPUs) affect the rate of minor collections?

Thanks and Regards,
Sujit

This e-mail and any files transmitted with it are for the sole use of the intended recipient(s) and may contain confidential and privileged information. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. Any unauthorized review, use, disclosure, dissemination, forwarding, printing or copying of this email or any action taken in reliance on this e-mail is strictly prohibited and may be unlawful.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20081028/5018000d/attachment.html

From Y.S.Ramakrishna at Sun.COM Tue Oct 28 11:03:34 2008
From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna)
Date: Tue, 28 Oct 2008 11:03:34 -0700
Subject: ParallelGCThreads
In-Reply-To: <19B27FD5AF2EAA49A66F787911CF51952E782D@CTSINCHNSXUU.cts.com>
References: <19B27FD5AF2EAA49A66F787911CF51952E782D@CTSINCHNSXUU.cts.com>
Message-ID: 

Hi Sujit --

> We have a distributed trading application. I have a processor set of
> 16 CPUs bound to process A and 8 CPUs bound to process B. Please consider
> the following scenarios:
>
> Scenario 1: ParallelGCThreads set to 16 for both processes A and B.

16 for process B is definitely the wrong setting. You should use 8 or fewer for process B, since it has only an 8-processor pset to run on.

> Scenario 2: ParallelGCThreads set to 8 for both processes A and B.
>
> Now for Scenario 2, I see that thread stop time has increased (got
> worse) for process A and decreased (improved) for process B. Is this
> expected behaviour? Also, will this (# of ParallelGCThreads vs # of
> CPUs) affect the rate of minor collections?

Yes, exactly as expected. In Scenario 1, you have more than 8 cpus available for GC of A, so using more than 8 GC threads is paying off. If you exceed the # of cpus available, however, gc times will increase. That's what you saw in the case of B going from Scenario 2 to Scenario 1.

The # of parallel gc threads does not affect the time between parallel collections, but only the duration of the collections. The time between gcs is affected (mainly) by the size of Eden and the speed of the mutators.

-- ramki

> Thanks and Regards,
> Sujit
>
> This e-mail and any files transmitted with it are for the sole use of
> the intended recipient(s) and may contain confidential and privileged
> information.
> If you are not the intended recipient, please contact the sender by
> reply e-mail and destroy all copies of the original message.
> Any unauthorized review, use, disclosure, dissemination, forwarding,
> printing or copying of this email or any action taken in reliance on
> this e-mail is strictly prohibited and may be unlawful.
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

From Y.S.Ramakrishna at Sun.COM Tue Oct 28 11:15:58 2008
From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna)
Date: Tue, 28 Oct 2008 11:15:58 -0700
Subject: ParallelGCThreads
In-Reply-To: 
References: <19B27FD5AF2EAA49A66F787911CF51952E782D@CTSINCHNSXUU.cts.com>
Message-ID: 

By the way, I believe that the default # of gc workers you would have gotten (with UseParallelGC) for process A would have been 16, and for process B eight, if you had launched them bound. (Re)binding after launching does not readjust from the default, something we need to fix; so, in that case, you would need to explicitly set ParallelGCThreads. (With CMS, the numbers are similar, if linearly scaled down by a suitable factor.)

-- ramki

----- Original Message -----
From: Y Srinivas Ramakrishna
Date: Tuesday, October 28, 2008 11:06 am
Subject: Re: ParallelGCThreads
To: Sujit.Das at cognizant.com
Cc: hotspot-gc-use at openjdk.java.net

> Hi Sujit --
>
> > We have a distributed trading application. I have a processor set of
> > 16 CPUs bound to process A and 8 CPUs bound to process B. Please consider
> > the following scenarios:
> >
> > Scenario 1: ParallelGCThreads set to 16 for both processes A and B.
>
> 16 for process B is definitely the wrong setting. You should use
> 8 or fewer for process B, since it has only an 8-processor pset to run
> on.
>
> > Scenario 2: ParallelGCThreads set to 8 for both processes A and B.
> >
> > Now for Scenario 2, I see that thread stop time has increased (got
> > worse) for process A and decreased (improved) for process B. Is this
> > expected behaviour?
> > Also, will this (# of ParallelGCThreads vs # of
> > CPUs) affect the rate of minor collections?
>
> Yes, exactly as expected. In Scenario 1, you have more than 8 cpus
> available for GC of A, so using more than 8 GC threads is paying off.
> If you exceed the # of cpus available, however, gc times will increase.
> That's what you saw in the case of B going from Scenario 2 to Scenario 1.
>
> The # of parallel gc threads does not affect the time between parallel collections,
> but only the duration of the collections. The time between gcs is affected
> (mainly) by the size of Eden and the speed of the mutators.
>
> -- ramki
>
> > Thanks and Regards,
> > Sujit
> >
> > This e-mail and any files transmitted with it are for the sole use of
> > the intended recipient(s) and may contain confidential and privileged
> > information.
> > If you are not the intended recipient, please contact the sender by
> > reply e-mail and destroy all copies of the original message.
> > Any unauthorized review, use, disclosure, dissemination, forwarding,
> > printing or copying of this email or any action taken in reliance on
> > this e-mail is strictly prohibited and may be unlawful.
> > _______________________________________________
> > hotspot-gc-use mailing list
> > hotspot-gc-use at openjdk.java.net
> > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
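[The sizing rule discussed in this last thread can be sketched as follows — a hypothetical helper, assuming the behaviour ramki describes for this era of the JDK (one parallel GC worker per CPU visible at launch, with no readjustment after a later rebind), not a general rule:]

```java
// Sketch of the ParallelGCThreads sizing discussion above.
public class GcThreadsSketch {
    // Default ramki describes: one parallel GC worker per CPU visible when
    // the VM starts (the processor-set size, if launched already bound).
    static int defaultParallelGcThreads(int cpusVisibleAtLaunch) {
        return cpusVisibleAtLaunch;
    }

    // After a rebind the VM does not readjust, so the worker count should
    // be capped explicitly with -XX:ParallelGCThreads to the pset size.
    static int recommendedThreads(int defaultAtLaunch, int currentPsetSize) {
        return Math.min(defaultAtLaunch, currentPsetSize);
    }

    public static void main(String[] args) {
        // Process B launched unbound on a 16-cpu machine, then bound to 8 cpus:
        int dflt = defaultParallelGcThreads(16);
        System.out.println(dflt);                         // 16
        System.out.println(recommendedThreads(dflt, 8));  // 8
    }
}
```

[This mirrors the advice in the thread: more GC threads than the CPUs actually available (process B in Scenario 1) lengthens pauses, so the explicit flag is needed whenever the pset shrinks after launch.]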