From adamh at basis.com  Mon Apr  6 11:07:20 2009
From: adamh at basis.com (Adam Hawthorne)
Date: Mon, 6 Apr 2009 14:07:20 -0400
Subject: CMS changes from 6u4 to 6u11 ?
Message-ID: <200904061407.20880.adamh@basis.com>

I have a customer with whom I worked for a long time between 6u1 and 6u4, when Sun fixed the bug about very long pause times on Linux caused by stopping all the application threads. That fix resolved the issues he saw, where the load would climb to ~80-100 and then everything would begin running again.

The customer recently made four significant system changes:

1. New version of our product
2. Java upgrade from 6u4 to 6u11
3. 32-bit -> 64-bit machine
4. VMware virtualization

The previous machine was a 4-way Intel box running 32-bit Red Hat Advanced Server 4 (Linux 2.6.9). The new machine is an 8-way Xeon E4540 running 64-bit SUSE 10 SP2 (Linux 2.6.16). The VMware instance allocates 4 processors and 8GB of RAM. Since he moved to 64-bit, I suggested he increase his -Xmx and -Xms by about 30%.

This is his old Java command line:

-Xmx2048m -Xms900m -XX:NewRatio=4 -XX:MaxNewSize=200m -XX:+ExplicitGCInvokesConcurrent -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=50 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -server -verbose:gc -Xloggc:logs/gc.txt -XX:CompileCommandFile=cfg/.hotspot_compiler -Dnetworkaddress.cache.ttl=10 -Dsun.net.inetaddr.ttl=10 -Djava.awt.headless=true -XX:+PrintGCApplicationStoppedTime

I believe this upgrade occurred on Friday, 4/3. This morning, 4/6, he complained of sluggishness and high load again. I looked at his logs, and in the first hour I saw a 29-second remark.

He has rolled back to the previous version of our product to eliminate that variable as a source of confusion, but such a long remark was suspicious to me. I can send a log to anyone who wants to look.
I may have another log from this most recent restart as well, as he's just informed me it's beginning to show some similar behavior.

Thanks,

Adam

--
Adam Hawthorne
Software Engineer
BASIS International Ltd.
www.basis.com
+1.505.345.5232 Phone

From Y.S.Ramakrishna at Sun.COM  Mon Apr  6 11:27:24 2009
From: Y.S.Ramakrishna at Sun.COM (Y.S.Ramakrishna at Sun.COM)
Date: Mon, 06 Apr 2009 11:27:24 -0700
Subject: CMS changes from 6u4 to 6u11 ?
In-Reply-To: <200904061407.20880.adamh@basis.com>
References: <200904061407.20880.adamh@basis.com>
Message-ID: <49DA498C.1030304@Sun.COM>

If you run a 32-bit JVM on a 64-bit OS, you do not need to increase the heap size, since the native pointers used by your JVM are still just 32 bits. (I am assuming from your email below that you are still running a 32-bit JVM process on your 64-bit server.) In any case, that should not cause any such issues, nor, I believe, should the upgrade from 6u4 to 6u11.

Two questions to investigate:

(1) The effect of virtualization:
    (a) Are there other guests sharing those 4 processors with this guest?
    (b) Do you see sluggishness in general, or only when the concurrent collector runs?
(2) Could you send the logs from both cases? (In the interests of not cluttering up everyone's mailboxes, send them to me directly rather than to the list, unless others have also expressed an interest.)

As always, of course, remember that although there are Sun employees on this list, this is a community list and not an official Sun support channel, even when Sun employees respond to your questions.
For official Sun support channels, please turn to sun.com/services.

-- ramki

From adamh at basis.com  Mon Apr  6 13:45:08 2009
From: adamh at basis.com (Adam Hawthorne)
Date: Mon, 6 Apr 2009 16:45:08 -0400
Subject: CMS changes from 6u4 to 6u11 ?
In-Reply-To: <49DA498C.1030304@Sun.COM>
References: <200904061407.20880.adamh@basis.com> <49DA498C.1030304@Sun.COM>
Message-ID: <200904061645.14068.adamh@basis.com>

Hi Ramki,

On Mon April 6 2009, Y.S.Ramakrishna at sun.com wrote:
> If you run a 32-bit JVM on a 64-bit OS you do not need to increase the heap
> size, since the native pointers used by your JVM are still just 32-bit.
> (I am assuming from your email below that you are still running a 32-bit JVM
> process on your 64-bit server.)

Originally, he did switch to a 64-bit JVM at the same time (which is why I suggested the 30% increase).

> In any case, that should not cause any such issues, nor I believe should
> the upgrade from 6u4 to 6u11.
>
> Two questions to investigate:
> (1) the effect of virtualization:
> (a) are there other guests sharing those 4 processors with this guest?

The customer asserts this is the only guest currently in use.

> (b) do you see sluggishness in general or only when the concurrent
> collector runs?

Customer says the sluggishness increases as the number of users increases, but at normal usage throughout the day, his average heap occupancy is > 50%. Since we have CMSInitiatingOccupancyFraction=50, CMS is virtually always running.

> (2) could you send the logs from both cases? (In the interests of not
> cluttering up everyone's mailboxes, send them to me directly
> rather than to the list, unless others have also expressed an interest.)

I appreciate your interest and I'll send them immediately after this.
> As always, of course, remember that although there are Sun employees
> on this list, this is a community list and not an official Sun support channel,
> even when Sun employees respond to your questions. For official Sun support
> channels, please turn to sun.com/services

My apologies. I'm mostly concerned about the long remark, which I'd never seen before on that class of hardware configuration. I was also curious whether anyone else using CMS has seen issues when switching either from 32-bit to 64-bit Java, or from 6u4 to 6u11. I'm especially interested in how much memory requirements changed and whether fragmentation increased at all; if anyone noticed any other artifacts I should look out for, I'd appreciate any information you might have.

Thanks,

Adam

--
Adam Hawthorne
Software Engineer
BASIS International Ltd.
www.basis.com
+1.505.345.5232 Phone
From aaisinzon at guidewire.com  Mon Apr  6 16:49:30 2009
From: aaisinzon at guidewire.com (Alex Aisinzon)
Date: Mon, 6 Apr 2009 16:49:30 -0700
Subject: Tracking size of the object that caused the collection
Message-ID:

Hi all,

We have historically seen performance degradation when large Java objects were allocated. We came to this conclusion after reviewing logs, produced by another JVM, that trace the size of the object that triggered the collection.

Is there a flag that can be set to trace the size of the object whose allocation triggered the garbage collection? Our current target would be Sun JDK 1.5.

Thanks in advance

Alex Aisinzon

From Y.S.Ramakrishna at Sun.COM  Mon Apr  6 17:04:50 2009
From: Y.S.Ramakrishna at Sun.COM (Y.S.Ramakrishna at Sun.COM)
Date: Mon, 06 Apr 2009 17:04:50 -0700
Subject: Tracking size of the object that caused the collection
In-Reply-To:
References:
Message-ID: <49DA98A2.90307@Sun.COM>

Unfortunately, there isn't such a flag at the moment, although it would be very little work to add one. You can surmise that a GC is occurring for a large object allocation request based on how much of Eden is filled when the GC occurs, and on whether the old generation grew abnormally during that collection. But these are, of course, approximations.

-- ramki

From Peter.Kessler at Sun.COM  Mon Apr  6 17:43:06 2009
From: Peter.Kessler at Sun.COM (Peter B. Kessler)
Date: Mon, 06 Apr 2009 17:43:06 -0700
Subject: Tracking size of the object that caused the collection
In-Reply-To:
References:
Message-ID: <49DAA19A.5060905@Sun.COM>

What Ramki said. You can use -XX:+PrintHeapAtGC to see the remaining number of bytes in the eden space before each collection. Part of Ramki's approximation is that the eden is allocated in thread-local allocation buffers (TLABs), so the eden may look like it is full (of TLABs) when an allocation from a TLAB fails.

You can see the distribution of the sizes of the objects in the heap at any given time with "jmap -histo". That will show you the number of instances of each class and the total number of bytes occupied by those instances. For objects with fixed sizes (e.g., not arrays), you can use those numbers to figure out the size of an instance of each class. Then you can figure out your allocation distribution, and the probability that an allocation of any given class will cause a collection.

It could also be that your objects are so large that they don't fit in a TLAB (TLABs grow and shrink as needed, but only within limits), in which case you'll be doing slow-path allocation for your large objects. That grabs a lock, but it's not a highly-contended lock, and it shouldn't be noticeably slower to allocate one large object than many small objects that occupy the same amount of memory.
Large objects take longer to initialize than small objects, but again, not longer than initializing many small objects in the same amount of memory.

It could be that your objects are large enough to be allocated directly in the old generation. If those objects are short-lived, then you will be causing more old generation collections, which are not designed for short-lived objects and so will have more overhead than if the objects were allocated and collected from the young generation.

But you didn't actually say what problem you are trying to solve. This sounds like a strange lamppost to be looking under. Collections are (usually) triggered by generations being too full to satisfy an allocation request. The cost of a collection should therefore be thought of as amortized over all the allocations that filled the generation. Of course larger objects have a greater chance of being the ones that don't fit when a generation is getting full, but looking at just the object that pushes you over the edge is a biased sampling. A large object that causes a collection is no more "at fault" than all the little objects that filled up the generation to the point where there's no room for the large object. If your allocation pattern were different, such that the large object were allocated first and the little objects were allocated later, then you might never see the large object as "causing" a collection. (This temporal argument is dubious, since allocation is a continuous process: you would be hard-pressed to order your allocations such that your large objects were allocated just after a collection. But I hope you get the idea.)

			... peter

From vasu_t_s at hotmail.com  Mon Apr 20 16:50:03 2009
From: vasu_t_s at hotmail.com (vasu ts)
Date: Mon, 20 Apr 2009 23:50:03 +0000
Subject: frequent CMS collections/ CPU spike/ Hotspot JRE1.4.2_17/
Message-ID:

Hi all,

We have an application deployed on IBM WebSphere 5.1 / Solaris 5.9 / Sun HotSpot JRE 1.4.2_17. We have 4 JVMs running on the same machine. These JVMs receive XML messages from an MQ queue, which are processed (the business logic stores the data from the XML into a database), and XML replies are sent back to the MQ queue.

Hardware:
8 dual core - SPARC IV
4 single core - SPARC III

JRE options set on the JVMs:

-server -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xmx768m -Xms768m -XX:MaxNewSize=500m -XX:NewSize=500m -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=128

During our stress test we are seeing that the CMS collector starts old gen collections very frequently, and CPU usage spikes up to 99%-100%. Our stress test adds a user per second until we reach a 2500-user limit, then maintains a steady rate of 2500 users.

Attached are the GC logs from one of the JVMs, the PrintGCStats details, and the GC timeline graph from the GCHisto tool.

Is there anything I should set so that the CMS collector doesn't start so frequently? Also, I don't know whether increasing the total heap size (to 1GB) would improve this situation.

Please provide your comments.

thanks
vasu..
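The flags above fix the generation sizes arithmetically. Here is a rough sketch of what they imply (assuming the usual HotSpot relations, young gen = eden + two survivor spaces with SurvivorRatio = eden/survivor; exact figures vary with heap alignment, and the class name is just illustrative):

```java
// Rough generation sizing implied by -Xmx768m -XX:NewSize=500m
// -XX:SurvivorRatio=128. Assumes the usual HotSpot relations:
// young = eden + 2 survivors, SurvivorRatio = eden / survivor.
public class GenSizes {
    static long survivorMb(long youngMb, long survivorRatio) {
        // young = eden + 2*survivor and eden = ratio*survivor
        // => survivor = young / (ratio + 2)
        return youngMb / (survivorRatio + 2);
    }

    public static void main(String[] args) {
        long heapMb = 768, youngMb = 500, ratio = 128;
        long survivor = survivorMb(youngMb, ratio);
        long eden = youngMb - 2 * survivor;
        long oldGen = heapMb - youngMb;
        System.out.println("survivor=" + survivor + "M eden=" + eden
                + "M old=" + oldGen + "M");  // survivor=3M eden=494M old=268M
    }
}
```

So behind the 500 MB young gen there are only about 268 MB of tenured space, and with MaxTenuringThreshold=0 every object that survives a minor collection is promoted straight into it.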
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nativelogs.zip
Type: application/octet-stream
Size: 222760 bytes
Desc: not available
Url : http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20090420/5ba4c791/attachment.obj

From Jon.Masamitsu at Sun.COM  Mon Apr 20 19:46:08 2009
From: Jon.Masamitsu at Sun.COM (Jon Masamitsu)
Date: Mon, 20 Apr 2009 19:46:08 -0700
Subject: frequent CMS collections/ CPU spike/ Hotspot JRE1.4.2_17/
In-Reply-To:
References:
Message-ID: <49ED3370.2000701@sun.com>

With a total heap of 768m and a young gen of 500m, you might not have enough room in the tenured (old) gen. That is, the amount of free space left in the tenured gen may be so small that CMS thinks it needs to start a collection immediately. Try a larger total heap or a smaller young gen.

I see that you're turning on

-XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=128

so everything that survives a young gen collection is promoted to the tenured gen. Those values reduce the CMS pause times, but they also fill up the tenured generation faster. If you don't have a specific reason to use those values, 15 for the tenuring threshold and 6 or 8 for the survivor ratio may serve you better.

From vasu_t_s at hotmail.com  Mon Apr 20 20:18:18 2009
From: vasu_t_s at hotmail.com (vasu ts)
Date: Tue, 21 Apr 2009 03:18:18 +0000
Subject: frequent CMS collections/ CPU spike/ Hotspot JRE1.4.2_17/
In-Reply-To: <49ED3370.2000701@sun.com>
References: <49ED3370.2000701@sun.com>
Message-ID:

Thanks for the reply.

In our application, we generate lots of objects (XML marshalling and unmarshalling) which are never re-used, so we thought increasing the young gen would give the objects enough time to die young.

Also, in the native logs it seems like the entire young gen is being filled up and collected.
Does this mean the young gen should be allocated more space? (This would also mean the total heap size might have to increase.)

9168.037: [GC 9168.038: [ParNew: 504044K->0K(508096K), 0.1534303 secs] 669773K->196481K(782528K), 0.1547104 secs]
9168.356: [CMS-concurrent-sweep: 2.269/2.562 secs]
9168.356: [CMS-concurrent-reset-start]
9168.766: [CMS-concurrent-reset: 0.409/0.409 secs]
9170.798: [GC [1 CMS-initial-mark: 193751K(274432K)] 466502K(782528K), 2.8434545 secs]
9173.644: [CMS-concurrent-mark-start]
9176.626: [GC 9176.627: [ParNew: 504111K->0K(508096K), 0.1026683 secs] 697862K->214832K(782528K), 0.1035612 secs]
9177.017: [CMS-concurrent-mark: 3.133/3.373 secs]
9177.017: [CMS-concurrent-preclean-start]
9177.377: [CMS-concurrent-preclean: 0.319/0.359 secs]
9177.383: [GC9177.385: [Rescan (parallel) , 0.2564013 secs]9177.641: [weak refs processing, 0.0331580 secs] [1 CMS-remark: 214832K(274432K)] 282063K(782528K), 0.2928263 secs]
9177.679: [CMS-concurrent-sweep-start]
9178.515: [CMS-concurrent-sweep: 0.835/0.836 secs]
9178.516: [CMS-concurrent-reset-start]
9178.623: [CMS-concurrent-reset: 0.107/0.107 secs]

Will try out your suggestions: -XX:MaxTenuringThreshold=15 and -XX:SurvivorRatio=6 or 8.

vasu..

From Jon.Masamitsu at Sun.COM  Mon Apr 20 20:49:29 2009
From: Jon.Masamitsu at Sun.COM (Jon Masamitsu)
Date: Mon, 20 Apr 2009 20:49:29 -0700
Subject: frequent CMS collections/ CPU spike/ Hotspot JRE1.4.2_17/
In-Reply-To:
References: <49ED3370.2000701@sun.com>
Message-ID: <49ED4249.2090809@sun.com>

vasu ts wrote on 04/20/09 20:18:

> Thanks for the reply.
>
> In our application, we generate lots of objects (xml marshalling and
> unmarshalling) which are never re-used. So we thought increasing the
> young gen would give enough time for the objects to die young.

What you say is likely true, but when the young gen gets collected there has to be space in the tenured gen for anything that survives the young gen collection, so you need a tenured gen that also meets the needs of your application.

> Also, in the native logs seems like all the young gen is being filled
> up and is being collected. Does this mean the young gen should be
> allocated more space? (This will mean the total heap size might have
> to increase also.)

No, the young gen does not necessarily need more space. The way the GC works is that allocations are done until a generation is full, and then a collection is done. Unless you stop doing allocations, you're always going to have the young gen fill up and have to do GCs.
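Jon's point about promotion can be made concrete with a toy model (purely illustrative, not HotSpot code; the ~18 MB promoted per minor GC is read off the ParNew lines in the log above, where total heap occupancy grows from 196481K to 214832K across one young collection):

```java
// Toy model: the young gen always fills and collects; what matters for
// CMS frequency is how much each minor GC promotes into the old gen.
public class PromotionSim {
    // Number of minor GCs before an old gen of oldCapacityMb fills,
    // if each minor GC promotes survivingPerGcMb.
    static int gcsUntilOldFull(long oldCapacityMb, long survivingPerGcMb) {
        long used = 0;
        int gcs = 0;
        while (used + survivingPerGcMb <= oldCapacityMb) {
            used += survivingPerGcMb;
            gcs++;
        }
        return gcs;
    }

    public static void main(String[] args) {
        long oldMb = 268; // 768m total heap minus the 500m young gen
        // MaxTenuringThreshold=0: everything alive at a minor GC promotes
        // (~18 MB per GC in the log above).
        System.out.println(gcsUntilOldFull(oldMb, 18));  // 14
        // If survivor spaces kept most of that from being promoted:
        System.out.println(gcsUntilOldFull(oldMb, 2));   // 134
    }
}
```

Under these assumptions the 268 MB tenured gen fills after only about 14 minor collections, which is consistent with CMS cycling almost continuously during the stress test.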
> > >_______________________________________________ > > >hotspot-gc-use mailing list > > >hotspot-gc-use at openjdk.java.net > > >http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From charles.nutter at Sun.COM Fri Apr 24 01:48:48 2009 From: charles.nutter at Sun.COM (Charles Oliver Nutter) Date: Fri, 24 Apr 2009 10:48:48 +0200 Subject: Troublesome reflection-cached SoftReferences Message-ID: <49F17CF0.5070007@sun.com> I've run into a case in JRuby where a reflected method is keeping alive a class, and that class references a large graph of JRuby objects. The reflected method is off an anonymous interface implementation we create at runtime. In order for that implementation to construct additional Ruby objects, it has to reference an instance of our org.jruby.Ruby class, which in turn references all globally-scoped data, and basically keeps a lot of stuff alive. The SoftReference appears to be part of the root set, and holds only an array of Constructor objects. Given a bit of time, this reference is cleared and the graph goes with it. But doing repeated redeploys of a JRuby application can fill up the heap before those soft references get a chance to clear. So my questions: * Is there any way to force the internal reflection caches to flush themselves? It's very inconvenient that the cache is keeping alive a class that should be dead. * Is there any way to force an early SoftReference cleanup, before their time has expired? * Does anyone else find this caching behavior irritating? - Charlie From Y.S.Ramakrishna at Sun.COM Fri Apr 24 02:02:01 2009 From: Y.S.Ramakrishna at Sun.COM (Y.
Srinivas Ramakrishna) Date: Fri, 24 Apr 2009 02:02:01 -0700 Subject: Troublesome reflection-cached SoftReferences In-Reply-To: <49F17CF0.5070007@sun.com> References: <49F17CF0.5070007@sun.com> Message-ID: <49F18009.4010807@sun.com> Charles Oliver Nutter wrote: > I've run into a case in JRuby where a reflected method is keeping alive > a class, and that class references a large graph of JRuby objects. The > reflected method is off an anonymous interface implementation we create > at runtime. In order for that implementation to construct additional > Ruby objects, it has to reference an instance of our org.jruby.Ruby > class, which in turn references all globally-scoped data, and basically > keeps a lot of stuff alive. > > The SoftReference appears to be part of the root set, and holds only an > array of Constructor objects. Given a bit of time, this reference is > cleared and the graph goes with it. But doing repeated redeploys of a > JRuby application can fill up the heap before those soft references get > a chance to clear. > > So my questions: > > * Is there any way to force the internal reflection caches to flush > themselves? It's very inconvenient that the cache is keeping alive a > class that should be dead. > > * Is there any way to force an early SoftReference cleanup, before their > time has expired? > You can try -XX:SoftRefLRUPolicyMSPerMB=0, which is kind of a sledgehammer to clear soft refs the first time GC sees them (making them behave, in essence, like weak references) > * Does anyone else find this caching behavior irritating? > The policy is to clear a soft ref if it has not been accessed "recently". The "recency threshold metric" itself is computed as SoftRefLRUPolicyMSPerMB * FreeMemory following last major collection, so that as heap pressure increases we should be clearing soft refs more aggressively albeit still in an LRU fashion.
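The recency computation Ramki describes can be sketched numerically: a soft reference survives a collection only if its idle time is below SoftRefLRUPolicyMSPerMB times the free heap (in MB) after the last major collection. This is an illustrative model of the policy, not the actual HotSpot code:

```java
public class SoftRefClearingModel {
    // A soft ref is cleared once it has been idle longer than
    // msPerMB * freeHeapMB (free heap after the last major collection).
    // Less free heap => a shorter grace period.
    static boolean cleared(long idleMs, long msPerMB, long freeHeapMB) {
        return idleMs > msPerMB * freeHeapMB;
    }

    public static void main(String[] args) {
        // Default 1000 ms/MB with 100 MB free: refs idle for more than
        // 100 seconds are cleared.
        System.out.println(cleared(150_000, 1000, 100)); // true
        System.out.println(cleared(50_000, 1000, 100));  // false
        // -XX:SoftRefLRUPolicyMSPerMB=0: any idle ref is cleared at the
        // next GC, i.e. soft refs degrade to weak-ref-like behavior.
        System.out.println(cleared(1, 0, 100));          // true
    }
}
```

As heap pressure grows, freeHeapMB shrinks toward zero, so the threshold shrinks with it and even recently used soft refs become eligible for clearing.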
Could it be the case that your soft ref is accessed very frequently, so never falls below the computed recency threshold? In any event, the JVM spec guarantees that all soft refs that are not strongly reachable will be cleared before an OOM is issued. I trust your issue is one of performance badness on account of a large heap, not an OOM? thanks. -- ramki > - Charlie > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > From charles.nutter at sun.com Fri Apr 24 02:44:21 2009 From: charles.nutter at sun.com (Charles Oliver Nutter) Date: Fri, 24 Apr 2009 11:44:21 +0200 Subject: Troublesome reflection-cached SoftReferences In-Reply-To: <49F18009.4010807@sun.com> References: <49F17CF0.5070007@sun.com> <49F18009.4010807@sun.com> Message-ID: <49F189F5.80606@sun.com> Y. Srinivas Ramakrishna wrote: > You can try -XX:SoftRefLRUPolicyMSPerMB=0, which is kind of a sledgehammer > to clear soft refs the first time GC sees them (making them behave, in > essence, like weak references) Yes, definitely a sledgehammer. I knew about this flag, but since we're dealing with JRuby deploying to GlassFish it would be unwise for us to start monkeying with softref LRU policy. > Could it be the case that your soft ref is accessed very frequently, so > never falls below the computed > recency threshold? In this case, no...I do see it eventually disappear, but it takes some time. We're talking about a JRuby application deployed into GlassFish, so it's probably around 100MB of memory. I believe the default for SoftRefLRUPolicyMSPerMB is 1000ms, so we'd be talking about 100 seconds before the heap collects. However, if another undeploy/redeploy happens in those 100 seconds and we go up to, say, 150MB, does that mean all soft references would have to be unused for 150 seconds? And then if another undeploy/redeploy happens?
I think this is the case we're running into with "agile" developers doing more rapid redeployments. > In any event, the JVM spec guarantees that all soft refs that are not > strongly reachable > will be cleared before an OOM is issued. I trust your issue is one of > performance badness > on account of a large heap, not an OOM? Actually it is an OOM problem. The problem is that on repeated redeploys, the memory continues to grow because these SoftReferences are holding on to a large graph of data. Some of this data also holds class references, and so rather than getting a general heap OOM we get a PermGen OOM. Should a PermGen OOM force softly-reachable objects to be collected as well? - Charlie From Y.S.Ramakrishna at Sun.COM Fri Apr 24 03:04:41 2009 From: Y.S.Ramakrishna at Sun.COM (Y. Srinivas Ramakrishna) Date: Fri, 24 Apr 2009 03:04:41 -0700 Subject: Troublesome reflection-cached SoftReferences In-Reply-To: <49F189F5.80606@sun.com> References: <49F17CF0.5070007@sun.com> <49F18009.4010807@sun.com> <49F189F5.80606@sun.com> Message-ID: <49F18EB9.7060309@sun.com> Charles Oliver Nutter wrote: >> In any event, the JVM spec guarantees that all soft refs that are not >> strongly reachable >> will be cleared before an OOM is issued. I trust your issue is one >> of performance badness >> on account of a large heap, not an OOM? > > Actually it is an OOM problem. If it is an OOM problem we can ignore discussion of how to more aggressively clear the soft ref, since that is in some sense moot. > > The problem is that on repeated redeploys, the memory continues to > grow because these SoftReferences are holding on to a large graph of > data. Some of this data also holds class references, and so rather > than getting a general heap OOM we get a PermGen OOM. Should a PermGen > OOM force softly-reachable objects to be collected as well? > Yes, each and every softly reachable object should be collected before any kind of OOM (whether in perm or in regular heap) is thrown.
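The guarantee discussed here (all softly reachable objects are cleared before any OutOfMemoryError) can be observed directly: fill the heap with buffers held only by SoftReferences and watch the collector clear them instead of throwing. A hedged sketch, assuming nothing else holds the buffers strongly:

```java
import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

public class SoftRefsClearedBeforeOOM {
    // Allocates 1 MB buffers reachable only through SoftReferences until
    // the collector clears some of them; returns true once clearing is
    // observed. Because the spec requires soft refs to be cleared before
    // an OOM, this should return true rather than throw.
    static boolean observeClearing() {
        List<SoftReference<byte[]>> refs = new ArrayList<>();
        // Allocate slightly more than the heap could ever hold live, so the
        // loop cannot finish without the collector clearing something.
        long budget = Runtime.getRuntime().maxMemory() + (64L << 20);
        for (long allocated = 0; allocated < budget; allocated += 1 << 20) {
            refs.add(new SoftReference<>(new byte[1 << 20]));
            if (refs.size() % 16 == 0
                    && refs.stream().anyMatch(r -> r.get() == null)) {
                return true; // GC cleared soft refs under pressure, no OOM
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(observeClearing());
    }
}
```

Charlie's situation differs in that the PermGen, not the regular heap, is exhausted; but the same clearing guarantee is supposed to apply before either kind of OOM, which is why the retained graph is suspicious.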
Have you made sure your soft ref's referent is not strongly reachable? BTW, -XX:+PrintReferenceGC will give some stats on Reference object processing. If you run a debug JVM then -XX:+TraceReferenceGC will give (altogether too much) tracing info as References are processed by GC. -- ramki From Jon.Masamitsu at Sun.COM Fri Apr 24 06:31:20 2009 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Fri, 24 Apr 2009 06:31:20 -0700 Subject: Troublesome reflection-cached SoftReferences In-Reply-To: <49F18EB9.7060309@sun.com> References: <49F17CF0.5070007@sun.com> <49F18009.4010807@sun.com> <49F189F5.80606@sun.com> <49F18EB9.7060309@sun.com> Message-ID: <49F1BF28.9060809@sun.com> Y. Srinivas Ramakrishna wrote On 04/24/09 03:04,: >Charles Oliver Nutter wrote: > > >>>In any event, the JVM spec guarantees that all soft refs that are not >>>strongly reachable >>>will be cleared before an OOM is issued. I trust your issue is one >>>of performance badness >>>on account of a large heap, not an OOM? >>> >>> >>Actually it is an OOM problem. >> >> > >If it is an OOM problem we can ignore discussion of how to more >aggressively clear the soft ref, since that is >in some sense moot. > > Which garbage collector and which jdk release are you using? From charles.nutter at sun.com Fri Apr 24 07:02:43 2009 From: charles.nutter at sun.com (Charles Oliver Nutter) Date: Fri, 24 Apr 2009 16:02:43 +0200 Subject: Troublesome reflection-cached SoftReferences In-Reply-To: <49F1BF28.9060809@sun.com> References: <49F17CF0.5070007@sun.com> <49F18009.4010807@sun.com> <49F189F5.80606@sun.com> <49F18EB9.7060309@sun.com> <49F1BF28.9060809@sun.com> Message-ID: <49F1C683.8000204@sun.com> Jon Masamitsu wrote: > Which garbage collector and which jdk release are you using? Java HotSpot(TM) 64-Bit Server VM version 1.6.0_07-b06-57 Garbage collector: Name = 'Copy' Garbage collector: Name = 'MarkSweepCompact' These are the defaults running on Mac.
Yes, this is the Apple JDK, but we've reproduced these issues under Sun/OpenJDK as well. I'm really just trying to figure out *why* all this stuff is sticking around, and the soft reference has been my first solid lead. - Charlie From charles.nutter at sun.com Fri Apr 24 08:38:20 2009 From: charles.nutter at sun.com (Charles Oliver Nutter) Date: Fri, 24 Apr 2009 17:38:20 +0200 Subject: Troublesome reflection-cached SoftReferences In-Reply-To: <49F17CF0.5070007@sun.com> References: <49F17CF0.5070007@sun.com> Message-ID: <49F1DCEC.90601@sun.com> Charles Oliver Nutter wrote: > The SoftReference appears to be part of the root set, and holds only an > array of Constructor objects. Given a bit of time, this reference is > cleared and the graph goes with it. But doing repeated redeploys of a > JRuby application can fill up the heap before those soft references get > a chance to clear. Here's a screenshot of the SoftReference and its contents, leading up to a JRuby "RubyClass" instance which causes things to stay alive. This is using a HeapAnalyzer tool I found on IBM alphaworks: http://www.alphaworks.ibm.com/tech/heapanalyzer Using this tool against a jmap dump, I look at the root set and see a SoftReference holding a Constructor[]. This appears to also happen for several SoftReferences for other types of reflected information like Methods and Fields. Walking down the heap I see these all referencing JRuby classes, but I never see the array they contain get collected. I'm more than happy to put a heap dump somewhere and try anything. At this point I just don't know how to fix this reflection issue (if this is actually the issue) or how to find out what's actually keeping all my objects alive. - Charlie -------------- next part -------------- A non-text attachment was scrubbed...
Name: Picture 2.png Type: image/png Size: 68967 bytes Desc: not available Url : http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20090424/1c2b28c4/attachment.png From Peter.Kessler at Sun.COM Fri Apr 24 08:42:49 2009 From: Peter.Kessler at Sun.COM (Peter B. Kessler) Date: Fri, 24 Apr 2009 08:42:49 -0700 Subject: Troublesome reflection-cached SoftReferences In-Reply-To: <49F17CF0.5070007@sun.com> References: <49F17CF0.5070007@sun.com> Message-ID: <49F1DDF9.9080506@Sun.COM> Charles Oliver Nutter wrote: > I've run into a case in JRuby where a reflected method is keeping alive > a class, and that class references a large graph of JRuby objects. The > reflected method is off an anonymous interface implementation we create > at runtime. In order for that implementation to construct additional > Ruby objects, it has to reference an instance of our org.jruby.Ruby > class, which in turn references all globally-scoped data, and basically > keeps a lot of stuff alive. > > The SoftReference appears to be part of the root set, and holds only an > array of Constructor objects. Given a bit of time, this reference is > cleared and the graph goes with it. But doing repeated redeploys of a > JRuby application can fill up the heap before those soft references get > a chance to clear. > > So my questions: > > * Is there any way to force the internal reflection caches to flush > themselves? It's very inconvenient that the cache is keeping alive a > class that should be dead. > > * Is there any way to force an early SoftReference cleanup, before their > time has expired? If you can get to the SoftReference objects, you can call java.lang.ref.Reference.clear() on them. That way they won't hold on to their referent until the SoftReference policy clears them. I'm with Ramki though: all the SoftReferences should be cleared by the collector before it throws OOME at you. ... peter > * Does anyone else find this caching behavior irritating? 
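Peter's suggestion is straightforward when you actually hold the SoftReference: java.lang.ref.Reference.clear() drops the referent immediately, without waiting for the LRU policy or a collection. (In Charlie's case the problematic references live inside java.lang.reflect's internal caches, so there is no supported way to reach them; this sketch only shows the mechanism.)

```java
import java.lang.ref.SoftReference;

public class ExplicitClear {
    // Demonstrates Reference.clear(): after the call the referent is no
    // longer reachable through the reference object.
    static boolean clearedAfterCall() {
        byte[] buf = new byte[1024];               // strong ref keeps referent alive
        SoftReference<byte[]> ref = new SoftReference<>(buf);
        boolean before = ref.get() == buf;         // referent still visible here
        ref.clear();                               // drop the referent eagerly
        boolean after = ref.get() == null;         // now gone
        return before && after;
    }

    public static void main(String[] args) {
        System.out.println(clearedAfterCall()); // true
    }
}
```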
> > - Charlie > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From charles.nutter at sun.com Fri Apr 24 09:04:03 2009 From: charles.nutter at sun.com (Charles Oliver Nutter) Date: Fri, 24 Apr 2009 18:04:03 +0200 Subject: Troublesome reflection-cached SoftReferences In-Reply-To: <49F1DDF9.9080506@Sun.COM> References: <49F17CF0.5070007@sun.com> <49F1DDF9.9080506@Sun.COM> Message-ID: <49F1E2F3.5040808@sun.com> Peter B. Kessler wrote: > If you can get to the SoftReference objects, you can call > java.lang.ref.Reference.clear() on them. That way they won't hold on to > their referent until the SoftReference policy clears them. > > I'm with Ramki though: all the SoftReferences should be cleared by the > collector before it throws OOME at you. Yeah, these seem to be some SoftReference out of my control, holding on to arrays of Method and Constructor and Field. I assume it's part of some caching for java.lang.reflect, but since they show up as being in the root set there's no parent to trace back to... And yeah, I don't understand why these wouldn't be getting cleaned up either. The object immediately under the SoftReference, e.g. the array of Constructor[], does not appear to have any other parents in the heap dump. I can't figure out why it doesn't go away. - Charlie From Jon.Masamitsu at Sun.COM Fri Apr 24 11:50:25 2009 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Fri, 24 Apr 2009 11:50:25 -0700 Subject: Troublesome reflection-cached SoftReferences In-Reply-To: <49F1C683.8000204@sun.com> References: <49F17CF0.5070007@sun.com> <49F18009.4010807@sun.com> <49F189F5.80606@sun.com> <49F18EB9.7060309@sun.com> <49F1BF28.9060809@sun.com> <49F1C683.8000204@sun.com> Message-ID: <49F209F1.7020505@sun.com> Charles Oliver Nutter wrote On 04/24/09 07:02,: >Jon Masamitsu wrote: > > >>Which garbage collector and which jdk release are you using? 
>> >> > >Java HotSpot(TM) 64-Bit Server VM version 1.6.0_07-b06-57 > >Garbage collector: Name = 'Copy' >Garbage collector: Name = 'MarkSweepCompact' > > I don't know how Apple has set its defaults, and the names above are not specific enough. Can you run the command java -XX:+PrintGCDetails -XX:+PrintCommandLineFlags -version You should get output such as java -XX:+PrintGCDetails -XX:+PrintCommandLineFlags -version -XX:MaxHeapSize=1073741824 -XX:+PrintCommandLineFlags -XX:+PrintGCDetails -XX:+UseParallelGC java version "1.6.0" Java(TM) SE Runtime Environment (build 1.6.0-b105) Java HotSpot(TM) Server VM (build 1.6.0-b105, mixed mode) Heap PSYoungGen total 12544K, used 215K [0xf4000000, 0xf4e00000, 0xfb200000) eden space 10752K, 2% used [0xf4000000,0xf4035c38,0xf4a80000) from space 1792K, 0% used [0xf4c40000,0xf4c40000,0xf4e00000) to space 1792K, 0% used [0xf4a80000,0xf4a80000,0xf4c40000) PSOldGen total 110592K, used 0K [0xbb000000, 0xc1c00000, 0xf4000000) object space 110592K, 0% used [0xbb000000,0xbb000000,0xc1c00000) PSPermGen total 16384K, used 1449K [0xb7000000, 0xb8000000, 0xbb000000) object space 16384K, 8% used [0xb7000000,0xb716a5b8,0xb8000000) This will tell us exactly what is being run. >These are the defaults running on Mac. Yes, this is the Apple JDK, but >we've reproduced these issues under Sun/OpenJDK as well. I'm really just >trying to figure out *why* all this stuff is sticking around, and the >soft reference has been my first solid lead.
> >- Charlie >_______________________________________________ >hotspot-gc-use mailing list >hotspot-gc-use at openjdk.java.net >http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > From vasu_t_s at hotmail.com Sat Apr 25 06:21:52 2009 From: vasu_t_s at hotmail.com (vasu ts) Date: Sat, 25 Apr 2009 13:21:52 +0000 Subject: concurrent mode failure in minor gc/ -XX:CMSFullGCsBeforeCompaction=1 In-Reply-To: <49ED4249.2090809@sun.com> References: <49ED3370.2000701@sun.com> <49ED4249.2090809@sun.com> Message-ID: Hi all, We have an application which is deployed on IBM websphere 5.1/Solaris 5.9/Sun hotspot JRE1.4.2_17/. We have 4 JVMs running on the same machine. These JVMs receive XML messages from an MQ queue, which are processed (the business logic stores the data from the XML into a database), and XML replies are sent back to the MQ queue. In our application, we generate lots of objects (XML marshalling and unmarshalling) which are never re-used. Our goal is to sustain the application for 24 hrs with a steady load of 2500 users. Hardware 8 dual core - sparc IV 4 single core - sparc III 1) Options used: -server -Xmx768m -Xms768m -XX:MaxNewSize=256m -XX:NewSize=256m -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 -XX:+UseCMSInitiatingOccupancyOnly We did the stress test with the -XX:CMSInitiatingOccupancyFraction=60 flag, and initially the CMS collector adhered to the setting, starting a collection when the old gen was 60% full. But once we reach the steady load of 2500 users, it seems the CMS collector cannot keep up with the allocation rate and no longer starts a collection at 60% full. See the attached Excel sheet, which shows how full the old gen was. Also, I have attached the native logs (native_stdout_option_1.log) from this test. Once the old gen is 94% full, there is a concurrent mode failure which results in a compaction that takes 21.2 secs.
Please note we saw CPU spikes (80-99% usage) when the old gen was filling up. We did not continue the test further because we thought we would eventually end up with several compactions over time. 2) Options used: -server -Xmx768m -Xms768m -XX:MaxNewSize=256m -XX:NewSize=256m -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSFullGCsBeforeCompaction=1 To reduce the compactions we added -XX:CMSFullGCsBeforeCompaction=1 and tested our application again. We saw several "concurrent mode failure" events while minor collections were happening. Any ideas why this is happening so early in the test (23 secs in)? See the snapshot from the native logs below. Also, I see that the survivor spaces are only 192K. Is this the default size with the CMS collector? Next week, we will see if increasing the heap size and tweaking the survivor ratio helps the application. thanks vasu..
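For reference, the trigger vasu configured can be modeled as a simple occupancy check: with -XX:+UseCMSInitiatingOccupancyOnly, a background CMS cycle starts once old-gen occupancy reaches CMSInitiatingOccupancyFraction percent of capacity. A sketch with illustrative names (the real collector also has to finish the cycle before the old gen fills, otherwise a concurrent mode failure like those in the logs results):

```java
public class CmsTriggerModel {
    // A background CMS cycle starts once usedKB/capacityKB >= fraction/100.
    // Written as a cross-multiplication to stay in integer arithmetic.
    static boolean startCycle(long usedKB, long capacityKB, int fraction) {
        return usedKB * 100 >= (long) fraction * capacityKB;
    }

    public static void main(String[] args) {
        // vasu's setup: 768m heap minus 256m young => 524288K old gen,
        // with -XX:CMSInitiatingOccupancyFraction=60.
        System.out.println(startCycle(262144, 524288, 60)); // 50% full -> false
        System.out.println(startCycle(314573, 524288, 60)); // just past 60% -> true
    }
}
```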
23.144: [GC {Heap before GC invocations=0: Heap par new generation total 261952K, used 261754K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 99% used [0xc4800000, 0xd479eb68, 0xd47a0000) from space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) to space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) concurrent mark-sweep generation total 524288K, used 0K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 28544K, used 28479K [0xf4800000, 0xf63e0000, 0xf8800000) 23.147: [ParNew: 261754K->0K(261952K), 0.1828999 secs] 261754K->9956K(786240K) Heap after GC invocations=1: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 9956K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 28544K, used 28479K [0xf4800000, 0xf63e0000, 0xf8800000) } , 0.1843301 secs] 34.492: [GC {Heap before GC invocations=1: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 9956K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 35200K, used 35085K [0xf4800000, 0xf6a60000, 0xf8800000) 34.492: [ParNew: 261760K->261760K(261952K), 0.0001138 secs]34.493: [CMS (concurrent mode failure)[Unloading class com.ibm.ws.Transaction.JTA.XARecUtil] : 9956K->17291K(524288K), 2.1463156 secs] 271716K->17291K(786240K) Heap after GC invocations=2: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 
0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 17291K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 35200K, used 35023K [0xf4800000, 0xf6a60000, 0xf8800000) } , 2.1476960 secs] 495.514: [GC {Heap before GC invocations=2: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 17291K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 36096K, used 35999K [0xf4800000, 0xf6b40000, 0xf8800000) 495.515: [ParNew: 261760K->261760K(261952K), 0.0001167 secs]495.515: [CMS (concurrent mode failure): 17291K->20296K(524288K), 1.7551403 secs] 279051K->20296K(786240K) Heap after GC invocations=3: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 20296K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 36096K, used 35983K [0xf4800000, 0xf6b40000, 0xf8800000) } , 1.7565744 secs] 660.887: [GC {Heap before GC invocations=3: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 20296K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 36992K, 
used 36930K [0xf4800000, 0xf6c20000, 0xf8800000) 660.888: [ParNew: 261760K->261760K(261952K), 0.0001172 secs]660.888: [CMS (concurrent mode failure): 20296K->23930K(524288K), 1.9326939 secs] 282056K->23930K(786240K) Heap after GC invocations=4: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 23930K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 36992K, used 36908K [0xf4800000, 0xf6c20000, 0xf8800000) } , 1.9341771 secs] 755.905: [GC {Heap before GC invocations=4: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 23930K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38528K, used 38413K [0xf4800000, 0xf6da0000, 0xf8800000) 755.906: [ParNew: 261760K->261760K(261952K), 0.0001159 secs]755.906: [CMS (concurrent mode failure): 23930K->23518K(524288K), 1.9600006 secs] 285690K->23518K(786240K) Heap after GC invocations=5: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 23518K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38528K, used 38377K [0xf4800000, 0xf6da0000, 0xf8800000) } , 1.9614396 secs] 832.698: [GC {Heap before GC invocations=5: Heap par new generation total 
261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 23518K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38633K [0xf4800000, 0xf6de0000, 0xf8800000) 832.699: [ParNew: 261760K->261760K(261952K), 0.0001141 secs]832.699: [CMS (concurrent mode failure)[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor1] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor22] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor21] : 23518K->22650K(524288K), 2.0814234 secs] 285278K->22650K(786240K) Heap after GC invocations=6: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 22650K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38563K [0xf4800000, 0xf6de0000, 0xf8800000) } , 2.0828510 secs] 900.246: [GC {Heap before GC invocations=6: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 22650K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38664K [0xf4800000, 0xf6de0000, 0xf8800000) 900.247: [ParNew: 261760K->261760K(261952K), 0.0001151 secs]900.247: [CMS (concurrent mode failure)[Unloading class 
sun.reflect.GeneratedSerializationConstructorAccessor15] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor40] [Unloading class sun.reflect.GeneratedMethodAccessor2] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor36] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor20] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor34] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor31] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor29] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor16] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor13] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor39] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor27] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor32] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor67] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor10] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor35] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor17] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor11] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor26] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor28] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor18] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor38] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor33] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor25] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor6] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor7] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor37] [Unloading class 
sun.reflect.GeneratedSerializationConstructorAccessor9] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor8] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor4] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor42] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor30] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor3] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor14] : 22650K->21627K(524288K), 2.0554604 secs] 284410K->21627K(786240K) Heap after GC invocations=7: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 21627K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38530K [0xf4800000, 0xf6de0000, 0xf8800000) } , 2.0569150 secs] 973.776: [GC {Heap before GC invocations=7: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 21627K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38606K [0xf4800000, 0xf6de0000, 0xf8800000) 973.777: [ParNew: 261760K->261760K(261952K), 0.0001188 secs]973.777: [CMS (concurrent mode failure): 21627K->21663K(524288K), 1.9085358 secs] 283387K->21663K(786240K) Heap after GC invocations=8: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to 
space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 21663K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38573K [0xf4800000, 0xf6de0000, 0xf8800000) } , 1.9100083 secs] 1044.718: [GC {Heap before GC invocations=8: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 21663K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38647K [0xf4800000, 0xf6de0000, 0xf8800000) 1044.719: [ParNew: 261760K->261760K(261952K), 0.0001185 secs]1044.719: [CMS (concurrent mode failure): 21663K->21546K(524288K), 1.8343919 secs] 283423K->21546K(786240K) Heap after GC invocations=9: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 21546K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38615K [0xf4800000, 0xf6de0000, 0xf8800000) } , 1.8358148 secs] 1115.795: [GC {Heap before GC invocations=9: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 21546K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38701K [0xf4800000, 0xf6de0000, 0xf8800000) 1115.795: [ParNew: 
261760K->261760K(261952K), 0.0001158 secs]1115.796: [CMS (concurrent mode failure): 21546K->20650K(524288K), 1.8198966 secs] 283306K->20650K(786240K) Heap after GC invocations=10: primadm at condo102# head -200 native_stdout.log |more 23.144: [GC {Heap before GC invocations=0: Heap par new generation total 261952K, used 261754K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 99% used [0xc4800000, 0xd479eb68, 0xd47a0000) from space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) to space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) concurrent mark-sweep generation total 524288K, used 0K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 28544K, used 28479K [0xf4800000, 0xf63e0000, 0xf8800000) 23.147: [ParNew: 261754K->0K(261952K), 0.1828999 secs] 261754K->9956K(786240K) Heap after GC invocations=1: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 9956K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 28544K, used 28479K [0xf4800000, 0xf63e0000, 0xf8800000) } , 0.1843301 secs] 34.492: [GC {Heap before GC invocations=1: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 9956K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 35200K, used 35085K [0xf4800000, 0xf6a60000, 0xf8800000) 34.492: [ParNew: 261760K->261760K(261952K), 0.0001138 secs]34.493: [CMS (concurrent mode failure)[Unloading class com.ibm.ws.Transaction.JTA.XARecUtil] :
9956K->17291K(524288K), 2.1463156 secs] 271716K->17291K(786240K) Heap after GC invocations=2: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 17291K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 35200K, used 35023K [0xf4800000, 0xf6a60000, 0xf8800000) } , 2.1476960 secs] 495.514: [GC {Heap before GC invocations=2: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 17291K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 36096K, used 35999K [0xf4800000, 0xf6b40000, 0xf8800000) 495.515: [ParNew: 261760K->261760K(261952K), 0.0001167 secs]495.515: [CMS (concurrent mode failure) : 17291K->20296K(524288K), 1.7551403 secs] 279051K->20296K(786240K) Heap after GC invocations=3: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 20296K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 36096K, used 35983K [0xf4800000, 0xf6b40000, 0xf8800000) } , 1.7565744 secs] 660.887: [GC {Heap before GC invocations=3: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used 
[0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 20296K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 36992K, used 36930K [0xf4800000, 0xf6c20000, 0xf8800000) 660.888: [ParNew: 261760K->261760K(261952K), 0.0001172 secs]660.888: [CMS (concurrent mode failure) : 20296K->23930K(524288K), 1.9326939 secs] 282056K->23930K(786240K) Heap after GC invocations=4: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 23930K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 36992K, used 36908K [0xf4800000, 0xf6c20000, 0xf8800000) } , 1.9341771 secs] 755.905: [GC {Heap before GC invocations=4: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 23930K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38528K, used 38413K [0xf4800000, 0xf6da0000, 0xf8800000) 755.906: [ParNew: 261760K->261760K(261952K), 0.0001159 secs]755.906: [CMS (concurrent mode failure) : 23930K->23518K(524288K), 1.9600006 secs] 285690K->23518K(786240K) Heap after GC invocations=5: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, 
used 23518K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38528K, used 38377K [0xf4800000, 0xf6da0000, 0xf8800000) } , 1.9614396 secs] 832.698: [GC {Heap before GC invocations=5: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 23518K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38633K [0xf4800000, 0xf6de0000, 0xf8800000) 832.699: [ParNew: 261760K->261760K(261952K), 0.0001141 secs]832.699: [CMS (concurrent mode failure) [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor1] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor22] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor21] : 23518K->22650K(524288K), 2.0814234 secs] 285278K->22650K(786240K) Heap after GC invocations=6: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 22650K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38563K [0xf4800000, 0xf6de0000, 0xf8800000) } , 2.0828510 secs] 900.246: [GC {Heap before GC invocations=6: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 22650K [0xd4800000, 0xf4800000, 0xf4800000) 
concurrent-mark-sweep perm gen total 38784K, used 38664K [0xf4800000, 0xf6de0000, 0xf8800000) 900.247: [ParNew: 261760K->261760K(261952K), 0.0001151 secs]900.247: [CMS (concurrent mode failure) [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor15] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor40] [Unloading class sun.reflect.GeneratedMethodAccessor2] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor36] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor20] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor34] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor31] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor29] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor16] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor13] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor39] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor27] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor32] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor67] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor10] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor35] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor17] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor11] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor26] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor28] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor18] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor38] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor33] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor25] [Unloading class 
sun.reflect.GeneratedSerializationConstructorAccessor6] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor7] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor37] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor9] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor8] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor4] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor42] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor30] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor3] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor14] : 22650K->21627K(524288K), 2.0554604 secs] 284410K->21627K(786240K) Heap after GC invocations=7: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 21627K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38530K [0xf4800000, 0xf6de0000, 0xf8800000) } , 2.0569150 secs] 973.776: [GC {Heap before GC invocations=7: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 21627K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38606K [0xf4800000, 0xf6de0000, 0xf8800000) 973.777: [ParNew: 261760K->261760K(261952K), 0.0001188 secs]973.777: [CMS (concurrent mode failure) : 21627K->21663K(524288K), 1.9085358 secs] 283387K->21663K(786240K) Heap after GC 
invocations=8: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 21663K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38573K [0xf4800000, 0xf6de0000, 0xf8800000) } , 1.9100083 secs] 1044.718: [GC {Heap before GC invocations=8: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 21663K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38647K [0xf4800000, 0xf6de0000, 0xf8800000) 1044.719: [ParNew: 261760K->261760K(261952K), 0.0001185 secs]1044.719: [CMS (concurrent mode failure): 21663K->21546K(524288K), 1.8343919 secs] 283423K->21546K(786240K) Heap after GC invocations=9: Heap par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 21546K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38615K [0xf4800000, 0xf6de0000, 0xf8800000) } , 1.8358148 secs] 1115.795: [GC {Heap before GC invocations=9: Heap par new generation total 261952K, used 261760K [0xc4800000, 0xd4800000, 0xd4800000) eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) to space 192K, 0% used [0xd47a0000,
0xd47a0000, 0xd47d0000) concurrent mark-sweep generation total 524288K, used 21546K [0xd4800000, 0xf4800000, 0xf4800000) concurrent-mark-sweep perm gen total 38784K, used 38701K [0xf4800000, 0xf6de0000, 0xf8800000) 1115.795: [ParNew: 261760K->261760K(261952K), 0.0001158 secs]1115.796: [CMS (concurrent mode failure): 21546K->20650K(524288K), 1.8198966 secs] 283306K->20650K(786240K) Heap after GC invocations=10: -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20090425/1103810a/attachment.html -------------- next part -------------- A non-text attachment was scrubbed... Name: gcanalysis_option1.xls Type: application/vnd.ms-excel Size: 192000 bytes Desc: not available Url : http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20090425/1103810a/attachment.xls From Jon.Masamitsu at Sun.COM Sun Apr 26 22:47:25 2009 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Sun, 26 Apr 2009 22:47:25 -0700 Subject: concurrent mode failure in minor gc/ -XX:CMSFullGCsBeforeCompaction=1 In-Reply-To: References: <49ED3370.2000701@sun.com> <49ED4249.2090809@sun.com> Message-ID: <49F546ED.3030700@sun.com> The behavior of CMS with regard to concurrent mode failures has changed quite a bit since 1.4.2. Since this is a mailing list for the open-source JDK (currently JDK 7), you might get better 1.4.2 information from Sun support. vasu ts wrote on 04/25/09 06:21: > Hi all, > > We have an application deployed on IBM WebSphere 5.1/Solaris > 5.9/Sun HotSpot JRE 1.4.2_17. We have 4 JVMs running on the > same machine.
These JVMs receive XML messages from an MQ queue, which are > processed (the business logic stores the data from the XML in a database), and > XML replies are sent back to the MQ queue. In our application, we > generate lots of objects (XML marshalling and unmarshalling) which are > never reused. Our goal is to sustain the application for 24 hrs with a > steady load of 2500 users. > > Hardware > 8 dual-core SPARC IV > 4 single-core SPARC III > > > 1) > > Options used: > -server -Xmx768m -Xms768m -XX:MaxNewSize=256m -XX:NewSize=256m > -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC > -XX:CMSInitiatingOccupancyFraction=60 -XX:+UseCMSInitiatingOccupancyOnly > > We did the stress test with the -XX:CMSInitiatingOccupancyFraction=60 flag, > and what we saw was that initially the CMS collector adhered to the > setting and started a full GC when the old gen was 60% full. But when > we reached the steady load of 2500 users, it seems the CMS collector could not > keep up with the allocation rate, and thus could not do a full GC when the old gen > was 60% full. See the attached Excel file, which shows how full the old gen > got. Also, I have attached the native logs > (native_stdout_option_1.log) from this test. > > Once the old gen was 94% full, there was a concurrent mode failure, which > resulted in a compaction that took 21.2 secs. Please note we saw CPU > spikes (80-99% usage) while the old gen was filling up. We did not > continue the test further because we thought we would eventually end up > with several compactions over a period of time. > > 2) > > Options used: > -server -Xmx768m -Xms768m -XX:MaxNewSize=256m -XX:NewSize=256m > -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC > -XX:CMSInitiatingOccupancyFraction=60 > -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSFullGCsBeforeCompaction=1 > > To reduce the compactions, we added -XX:CMSFullGCsBeforeCompaction=1 > and tested our application again.
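The "concurrent mode failure" entries in the -verbose:gc logs quoted in this message are the pauses worth tallying, since each one is a stop-the-world fallback collection. A minimal sketch of how those pause times could be pulled out of such a log (a hypothetical helper written for this archive, not something from the thread; the regex assumes the 1.4.2-style log format shown here):

```python
import re

# Match "concurrent mode failure" followed (lazily) by the next
# "<seconds> secs]" token, which in logs of this format is the pause
# time of the fallback collection. DOTALL lets the match span the
# "[Unloading class ...]" lines that sometimes appear in between.
CMF_RE = re.compile(r"concurrent mode failure.*?(\d+\.\d+) secs\]", re.DOTALL)

def cmf_pauses(log_text):
    """Return all concurrent-mode-failure pause times, in seconds."""
    return [float(m.group(1)) for m in CMF_RE.finditer(log_text)]

# One entry taken verbatim from the logs quoted in this message.
sample = ("34.492: [ParNew: 261760K->261760K(261952K), 0.0001138 secs]34.493: "
          "[CMS (concurrent mode failure): 9956K->17291K(524288K), "
          "2.1463156 secs] 271716K->17291K(786240K)")
print(cmf_pauses(sample))  # -> [2.1463156]
```

Summing the returned list against the test duration gives a rough fraction of wall-clock time lost to these failures, which is the number the tuning discussion below is trying to drive down.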
We saw several > "concurrent mode failure" events while minor collections were happening. Any > ideas why this is happening so early in the test (23 secs in)? > See the snapshot from the native logs below. Also, I see that the > survivor spaces are only 192K. Is this the default size > with the CMS collector? Next week, we will see if increasing the heap > size and tweaking the survivor ratio helps the application. > > thanks > vasu.. > > > 23.144: [GC {Heap before GC invocations=0: > Heap > par new generation total 261952K, used 261754K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 99% used [0xc4800000, 0xd479eb68, 0xd47a0000) > from space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > to space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > concurrent mark-sweep generation total 524288K, used 0K [0xd4800000, > 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 28544K, used 28479K [0xf4800000, > 0xf63e0000, 0xf8800000) > 23.147: [ParNew: 261754K->0K(261952K), 0.1828999 secs] > 261754K->9956K(786240K) Heap after GC invocations=1: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 9956K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 28544K, used 28479K [0xf4800000, > 0xf63e0000, 0xf8800000) > } , 0.1843301 secs] > 34.492: [GC {Heap before GC invocations=1: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, 
used 9956K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 35200K, used 35085K [0xf4800000, > 0xf6a60000, 0xf8800000) > 34.492: [ParNew: 261760K->261760K(261952K), 0.0001138 secs]34.493: > [CMS (concurrent mode failure)[Unloading class > com.ibm.ws.Transaction.JTA.XARecUtil] > : 9956K->17291K(524288K), 2.1463156 secs] 271716K->17291K(786240K) > Heap after GC invocations=2: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 17291K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 35200K, used 35023K [0xf4800000, > 0xf6a60000, 0xf8800000) > } , 2.1476960 secs] > 495.514: [GC {Heap before GC invocations=2: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 17291K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 36096K, used 35999K [0xf4800000, > 0xf6b40000, 0xf8800000) > 495.515: [ParNew: 261760K->261760K(261952K), 0.0001167 secs]495.515: > [CMS (concurrent mode failure): 17291K->20296K(524288K), 1.7551403 > secs] 279051K->20296K(786240K) Heap after GC invocations=3: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 20296K > 
[0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 36096K, used 35983K [0xf4800000, > 0xf6b40000, 0xf8800000) > } , 1.7565744 secs] > 660.887: [GC {Heap before GC invocations=3: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 20296K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 36992K, used 36930K [0xf4800000, > 0xf6c20000, 0xf8800000) > 660.888: [ParNew: 261760K->261760K(261952K), 0.0001172 secs]660.888: > [CMS (concurrent mode failure): 20296K->23930K(524288K), 1.9326939 > secs] 282056K->23930K(786240K) Heap after GC invocations=4: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 23930K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 36992K, used 36908K [0xf4800000, > 0xf6c20000, 0xf8800000) > } , 1.9341771 secs] > 755.905: [GC {Heap before GC invocations=4: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 23930K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38528K, used 38413K [0xf4800000, > 0xf6da0000, 0xf8800000) > 755.906: [ParNew: 261760K->261760K(261952K), 0.0001159 
secs]755.906: > [CMS (concurrent mode failure): 23930K->23518K(524288K), 1.9600006 > secs] 285690K->23518K(786240K) Heap after GC invocations=5: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 23518K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38528K, used 38377K [0xf4800000, > 0xf6da0000, 0xf8800000) > } , 1.9614396 secs] > 832.698: [GC {Heap before GC invocations=5: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 23518K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38633K [0xf4800000, > 0xf6de0000, 0xf8800000) > 832.699: [ParNew: 261760K->261760K(261952K), 0.0001141 secs]832.699: > [CMS (concurrent mode failure)[Unloading class > sun.reflect.GeneratedSerializationConstructorAccessor1] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor22] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor21] > : 23518K->22650K(524288K), 2.0814234 secs] 285278K->22650K(786240K) > Heap after GC invocations=6: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 22650K > [0xd4800000, 0xf4800000, 0xf4800000) 
> concurrent-mark-sweep perm gen total 38784K, used 38563K [0xf4800000, > 0xf6de0000, 0xf8800000) > } , 2.0828510 secs] > 900.246: [GC {Heap before GC invocations=6: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 22650K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38664K [0xf4800000, > 0xf6de0000, 0xf8800000) > 900.247: [ParNew: 261760K->261760K(261952K), 0.0001151 secs]900.247: > [CMS (concurrent mode failure)[Unloading class > sun.reflect.GeneratedSerializationConstructorAccessor15] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor40] > [Unloading class sun.reflect.GeneratedMethodAccessor2] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor36] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor20] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor34] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor31] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor29] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor16] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor13] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor39] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor27] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor32] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor67] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor10] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor35] > [Unloading class 
sun.reflect.GeneratedSerializationConstructorAccessor17] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor11] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor26] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor28] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor18] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor38] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor33] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor25] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor6] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor7] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor37] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor9] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor8] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor4] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor42] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor30] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor3] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor14] > : 22650K->21627K(524288K), 2.0554604 secs] 284410K->21627K(786240K) > Heap after GC invocations=7: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 21627K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38530K [0xf4800000, > 0xf6de0000, 0xf8800000) > } , 2.0569150 secs] > 973.776: [GC {Heap before GC invocations=7: > 
Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 21627K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38606K [0xf4800000, > 0xf6de0000, 0xf8800000) > 973.777: [ParNew: 261760K->261760K(261952K), 0.0001188 secs]973.777: > [CMS (concurrent mode failure): 21627K->21663K(524288K), 1.9085358 > secs] 283387K->21663K(786240K) Heap after GC invocations=8: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 21663K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38573K [0xf4800000, > 0xf6de0000, 0xf8800000) > } , 1.9100083 secs] > 1044.718: [GC {Heap before GC invocations=8: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 21663K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38647K [0xf4800000, > 0xf6de0000, 0xf8800000) > 1044.719: [ParNew: 261760K->261760K(261952K), 0.0001185 secs]1044.719: > [CMS (concurrent mode failure): 21663K->21546K(524288K), 1.8343919 > secs] 283423K->21546K(786240K) Heap after GC invocations=9: > Heap > par new generation total 261952K, used 0K [0xc4800000, 
0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 21546K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38615K [0xf4800000, > 0xf6de0000, 0xf8800000) > } , 1.8358148 secs] > 1115.795: [GC {Heap before GC invocations=9: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 21546K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38701K [0xf4800000, > 0xf6de0000, 0xf8800000) > 1115.795: [ParNew: 261760K->261760K(261952K), 0.0001158 secs]1115.796: > [CMS (concurrent mode failure): 21546K->20650K(524288K), 1.8198966 > secs] 283306K->20650K(786240K) Heap after GC invocations=10: > primadm at condo102# head -200 native_stdout.log |more > 23.144: [GC {Heap before GC invocations=0: > Heap > par new generation total 261952K, used 261754K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 99% used [0xc4800000, 0xd479eb68, 0xd47a0000) > from space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > to space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > concurrent mark-sweep generation total 524288K, used 0K [0xd4800000, > 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 28544K, used 28479K [0xf4800000, > 0xf63e0000, 0xf8800000) > 23.147: [ParNew: 261754K->0K(261952K), 0.1828999 secs] > 261754K->9956K(786240K) Heap after GC invocations=1: > Heap > par new generation total 261952K, used 0K [0xc4800000, > 0xd4800000, > 0xd4800000) > eden space 261760K, 0%
used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 9956K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 28544K, used 28479K [0xf4800000, > 0xf63e0000, 0xf8800000) > } , 0.1843301 secs] > 34.492: [GC {Heap before GC invocations=1: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 9956K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 35200K, used 35085K [0xf4800000, > 0xf6a60000, 0xf8800000) > 34.492: [ParNew: 261760K->261760K(261952K), 0.0001138 secs]34.493: > [CMS (concurrent mode failure)[Unloading class com.ibm.ws.Transaction.JTA.XARecUtil] > : 9956K->17291K(524288K), 2.1463156 secs] 271716K->17291K(786240K) > Heap after GC invocations=2: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 17291K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 35200K, used 35023K [0xf4800000, > 0xf6a60000, 0xf8800000) > } , 2.1476960 secs] > 495.514: [GC {Heap before GC invocations=2: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000,
0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 17291K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 36096K, used 35999K [0xf4800000, > 0xf6b40000, 0xf8800000) > 495.515: [ParNew: 261760K->261760K(261952K), 0.0001167 secs]495.515: > [CMS (concurrent mode failure) > : 17291K->20296K(524288K), 1.7551403 secs] 279051K->20296K(786240K) > Heap after GC invocations=3: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 20296K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 36096K, used 35983K [0xf4800000, > 0xf6b40000, 0xf8800000) > } , 1.7565744 secs] > 660.887: [GC {Heap before GC invocations=3: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 20296K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 36992K, used 36930K [0xf4800000, > 0xf6c20000, 0xf8800000) > 660.888: [ParNew: 261760K->261760K(261952K), 0.0001172 secs]660.888: > [CMS (concurrent mode failure) > : 20296K->23930K(524288K), 1.9326939 secs] 282056K->23930K(786240K) > Heap after GC invocations=4: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 
524288K, used 23930K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 36992K, used 36908K [0xf4800000, > 0xf6c20000, 0xf8800000) > } , 1.9341771 secs] > 755.905: [GC {Heap before GC invocations=4: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 23930K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38528K, used 38413K [0xf4800000, > 0xf6da0000, 0xf8800000) > 755.906: [ParNew: 261760K->261760K(261952K), 0.0001159 secs]755.906: > [CMS (concurrent mode failure) > : 23930K->23518K(524288K), 1.9600006 secs] 285690K->23518K(786240K) > Heap after GC invocations=5: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 23518K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38528K, used 38377K [0xf4800000, > 0xf6da0000, 0xf8800000) > } , 1.9614396 secs] > 832.698: [GC {Heap before GC invocations=5: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 23518K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38633K [0xf4800000, > 0xf6de0000, 0xf8800000) > 832.699: [ParNew: 
261760K->261760K(261952K), 0.0001141 secs]832.699: > [CMS (concurrent mode failure) > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor1] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor22] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor21] > : 23518K->22650K(524288K), 2.0814234 secs] 285278K->22650K(786240K) > Heap after GC invocations=6: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 22650K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38563K [0xf4800000, > 0xf6de0000, 0xf8800000) > } , 2.0828510 secs] > 900.246: [GC {Heap before GC invocations=6: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 22650K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38664K [0xf4800000, > 0xf6de0000, 0xf8800000) > 900.247: [ParNew: 261760K->261760K(261952K), 0.0001151 secs]900.247: > [CMS (concurrent mode failure) > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor15] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor40] > [Unloading class sun.reflect.GeneratedMethodAccessor2] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor36] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor20] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor34] > 
[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor31] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor29] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor16] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor13] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor39] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor27] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor32] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor67] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor10] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor35] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor17] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor11] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor26] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor28] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor18] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor38] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor33] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor25] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor6] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor7] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor37] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor9] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor8] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor4] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor42] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor30] > [Unloading class 
sun.reflect.GeneratedSerializationConstructorAccessor3] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor14] > : 22650K->21627K(524288K), 2.0554604 secs] 284410K->21627K(786240K) > Heap after GC invocations=7: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 21627K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38530K [0xf4800000, > 0xf6de0000, 0xf8800000) > } , 2.0569150 secs] > 973.776: [GC {Heap before GC invocations=7: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 21627K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38606K [0xf4800000, > 0xf6de0000, 0xf8800000) > 973.777: [ParNew: 261760K->261760K(261952K), 0.0001188 secs]973.777: > [CMS (concurrent mode failure) > : 21627K->21663K(524288K), 1.9085358 secs] 283387K->21663K(786240K) > Heap after GC invocations=8: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 21663K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38573K [0xf4800000, > 0xf6de0000, 0xf8800000) > } , 1.9100083 secs] > 1044.718: [GC 
{Heap before GC invocations=8: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 21663K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38647K [0xf4800000, > 0xf6de0000, 0xf8800000) > 1044.719: [ParNew: 261760K->261760K(261952K), 0.0001185 secs]1044.719: > [CMS (concurrent mode failure): 21663K->21546K(524288K), 1.8343919 secs] 283423K->21546K(786240K) > Heap after GC invocations=9: > Heap > par new generation total 261952K, used 0K [0xc4800000, 0xd4800000, > 0xd4800000) > eden space 261760K, 0% used [0xc4800000, 0xc4800000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 21546K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38615K [0xf4800000, > 0xf6de0000, 0xf8800000) > } , 1.8358148 secs] > 1115.795: [GC {Heap before GC invocations=9: > Heap > par new generation total 261952K, used 261760K [0xc4800000, > 0xd4800000, 0xd4800000) > eden space 261760K, 100% used [0xc4800000, 0xd47a0000, 0xd47a0000) > from space 192K, 0% used [0xd47d0000, 0xd47d0000, 0xd4800000) > to space 192K, 0% used [0xd47a0000, 0xd47a0000, 0xd47d0000) > concurrent mark-sweep generation total 524288K, used 21546K > [0xd4800000, 0xf4800000, 0xf4800000) > concurrent-mark-sweep perm gen total 38784K, used 38701K [0xf4800000, > 0xf6de0000, 0xf8800000) > 1115.795: [ParNew: 261760K->261760K(261952K), 0.0001158 secs]1115.796: > [CMS (concurrent mode failure): 21546K->20650K(524288K), 1.8198966 secs] 283306K->20650K(786240K) > Heap after GC invocations=10:
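The repeated "concurrent mode failure" records above can be tallied mechanically, which helps when the log runs to thousands of minor collections. Below is a minimal Python sketch of such a tally; the regex, the helper name `cmf_pauses`, and the two sample records are illustrative assumptions, and it presumes each failure record sits on a single unwrapped log line:

```python
import re

# Matches records of the form seen in the log above, e.g.
#   973.777: [CMS (concurrent mode failure): 21627K->21663K(524288K), 1.9085358 secs]
CMF = re.compile(
    r"concurrent mode failure.*?"
    r"(\d+)K->(\d+)K\((\d+)K\),\s*([\d.]+)\s*secs"
)

def cmf_pauses(lines):
    """Yield (before_kb, after_kb, capacity_kb, pause_secs) per failure record."""
    for line in lines:
        m = CMF.search(line)
        if m:
            before, after, cap = (int(g) for g in m.groups()[:3])
            yield before, after, cap, float(m.group(4))

# Two records copied from the log above, as sample input.
log = [
    "973.777: [CMS (concurrent mode failure): 21627K->21663K(524288K), 1.9085358 secs]",
    "1044.719: [CMS (concurrent mode failure): 21663K->21546K(524288K), 1.8343919 secs]",
]
total = sum(p for *_, p in cmf_pauses(log))
print(round(total, 4))  # total stop-the-world seconds spent in these two failures
```

Note what such a tally makes obvious for this log: the old generation is nearly empty at each failure (about 21 MB of 512 MB), so the failures are not an old-gen capacity problem, which points back at the 192K survivor spaces and promotion behavior rather than at -Xmx.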
>------------------------------------------------------------------------ > >_______________________________________________ >hotspot-gc-use mailing list >hotspot-gc-use at openjdk.java.net >http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > From Y.S.Ramakrishna at Sun.COM Mon Apr 27 09:13:48 2009 From: Y.S.Ramakrishna at Sun.COM (Y.S.Ramakrishna at Sun.COM) Date: Mon, 27 Apr 2009 09:13:48 -0700 Subject: concurrent mode failure in minor gc/ -XX:CMSFullGCsBeforeCompaction=1 In-Reply-To: <49F546ED.3030700@sun.com> References: <49ED3370.2000701@sun.com> <49ED4249.2090809@sun.com> <49F546ED.3030700@sun.com> Message-ID: <49F5D9BC.8090806@Sun.COM> What Jon said re contacting Sun support etc. One important thing we have learned about CMS is that the filtering of short-lived objects in the young gen is important both for relieving short-term pressure on the CMS collector and in the long-term for reducing fragmentation of the old gen, either or both of which might otherwise cause concurrent mode failure. I have never myself found CMSFullGCsBeforeCompaction != 0 to be particularly useful as a tuning knob. regards. -- ramki >> 1) >> >> Options used : >> -server -Xmx768m -Xms768m -XX:MaxNewSize=256m -XX:NewSize=256m >> -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC >> -XX:CMSInitiatingOccupancyFraction=60 -XX:+UseCMSInitiatingOccupancyOnly >> ... >> >> 2) >> >> Options used: >> -server -Xmx768m -Xms768m -XX:MaxNewSize=256m -XX:NewSize=256m >> -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC >> -XX:CMSInitiatingOccupancyFraction=60 >> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSFullGCsBeforeCompaction=1 >> ...
> > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From Y.S.Ramakrishna at Sun.COM Mon Apr 27 09:17:03 2009 From: Y.S.Ramakrishna at Sun.COM (Y.S.Ramakrishna at Sun.COM) Date: Mon, 27 Apr 2009 09:17:03 -0700 Subject: concurrent mode failure in minor gc/ -XX:CMSFullGCsBeforeCompaction=1 In-Reply-To: <49F5D9BC.8090806@Sun.COM> References: <49ED3370.2000701@sun.com> <49ED4249.2090809@sun.com> <49F546ED.3030700@sun.com> <49F5D9BC.8090806@Sun.COM> Message-ID: <49F5DA7F.3040105@Sun.COM> > One important thing we have learned about CMS is > that the filtering of short-lived objects in the young gen is > important both for relieving short-term pressure on the CMS collector > and in the long-term for reducing fragmentation of the old > gen, either or both of which might otherwise cause concurrent mode failure. Rereading that, I see that it was too cryptic. What I meant was that use of the survivor spaces (and at _least_ a tenuring threshold of 1, if not higher -- use PrintTenuringDistribution to tune an optimum) is crucial to getting good performance out of CMS. - ramki From Jon.Masamitsu at Sun.COM Mon Apr 27 10:30:02 2009 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Mon, 27 Apr 2009 10:30:02 -0700 Subject: concurrent mode failure in minor gc/ -XX:CMSFullGCsBeforeCompaction=1 In-Reply-To: <49F5DA7F.3040105@Sun.COM> References: <49ED3370.2000701@sun.com> <49ED4249.2090809@sun.com> <49F546ED.3030700@sun.com> <49F5D9BC.8090806@Sun.COM> <49F5DA7F.3040105@Sun.COM> Message-ID: <49F5EB9A.5060103@sun.com> Ramki's mail reminded me that you might find these useful if you have not already seen them.
http://blogs.sun.com/jonthecollector/entry/the_fault_with_defaults http://blogs.sun.com/jonthecollector/entry/what_the_heck_s_a Y.S.Ramakrishna at Sun.COM wrote On 04/27/09 09:17,: >>One important thing we have learned about CMS is >>that the filtering of short-lived objects in the young gen is >>important both for relieving short-term pressure on the CMS collector >>and in the long-term for reducing fragmentation of the old >>gen, either or both of which might otherwise cause concurrent mode failure. >> >> > >Rereading that, I see that it was too cryptic. What I meant was >that use of the survivor spaces (and at _least_ a tenuring threshold of >1, if not higher -- use PrintTenuringDistribution to tune an optimum) >is crucial to getting good performance out of CMS. > >- ramki >_______________________________________________ >hotspot-gc-use mailing list >hotspot-gc-use at openjdk.java.net >http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > >
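Ramki's clarification quoted above (real survivor spaces plus a tenuring threshold of at least 1, tuned with PrintTenuringDistribution) maps onto concrete HotSpot flags. The fragment below is one possible sketch of how the option list quoted earlier in the thread might change; the specific SurvivorRatio and MaxTenuringThreshold values are assumptions for illustration, not values endorsed in the thread:

```
# Illustrative revision of the options quoted above. SurvivorRatio and
# MaxTenuringThreshold are assumed starting points, not recommendations
# from the thread; tune them against -XX:+PrintTenuringDistribution output.
-server -Xms768m -Xmx768m -XX:NewSize=256m -XX:MaxNewSize=256m
# eden:survivor = 6:1:1 instead of the near-zero (192K) survivors in the log
-XX:SurvivorRatio=6
# copy short-lived objects through the survivors a few times before promoting
-XX:MaxTenuringThreshold=4
# print object-age histograms at each minor GC to pick the threshold
-XX:+PrintTenuringDistribution
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled
-XX:CMSInitiatingOccupancyFraction=60 -XX:+UseCMSInitiatingOccupancyOnly
```

With -XX:SurvivorRatio=6 and a 256m young generation, each survivor space would be 256m/8 = 32m rather than 192K, giving short-lived objects somewhere to die before they are promoted into, and fragment, the CMS old generation.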