From pingavinash at yahoo.com Wed Sep 11 01:24:22 2013
From: pingavinash at yahoo.com (Avinash Mishra)
Date: Wed, 11 Sep 2013 16:24:22 +0800 (SGT)
Subject: Need help on long minor GC pause
Message-ID: <1378887862.18876.YahooMailNeo@web192505.mail.sg3.yahoo.com>

Hi folks,

We are facing a strange issue where the minor GC gets stuck for several seconds:

734827.324: [GC 639235K->587610K(1042048K), 2.4140118 secs]
734846.488: [GC 640090K->589859K(1042048K), 22.1046232 secs]
734949.577: [GC 642339K->590078K(1042048K), 12.2527731 secs]
735045.592: [GC 642558K->591084K(1042048K), 0.1158979 secs]

Our Java configuration is something like this:

java -Xms1024m -Xmx1024m -XX:+UseConcMarkSweepGC -cp %MY_CLASSPATH% com.myclass

We are using Java 1.6u24 and the application is running on a Windows 2003 server.

The issue is specific to a couple of servers at a customer site and nowhere else. The servers run fine for some 45 days before the issue shows up.

From the logs (above) it seems that there is plenty of old space available on the heap, so we have ruled out concurrent mode failures. We are looking into fragmentation as the potential root cause. We have enabled these traces to confirm whether we have a fragmentation issue:

-XX:+PrintGCDetails
-XX:+PrintPromotionFailure
-XX:PrintFLSStatistics=1

From these logs we are monitoring "Max chunk available" and looking for a decreasing trend to determine whether the servers are heading towards a fragmentation problem. Please let me know if anything else is required to confirm a fragmentation issue, and whether there is a way to confirm it sooner instead of waiting a couple of months.

Please let me know if you have any other pointers on the potential root cause.

Thanks,
Avinash
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20130911/6c3f9d1f/attachment-0001.html

From jwu at gmx.ch Wed Sep 11 02:54:16 2013
From: jwu at gmx.ch (Wüthrich Jörg)
Date: Wed, 11 Sep 2013 11:54:16 +0200
Subject: Need help on long minor GC pause
In-Reply-To: <1378887862.18876.YahooMailNeo@web192505.mail.sg3.yahoo.com>
References: <1378887862.18876.YahooMailNeo@web192505.mail.sg3.yahoo.com>
Message-ID: <52303DC8.5000801@gmx.ch>

Hi Avinash,

If it turns out to be a fragmentation issue, you might want to consider upgrading to a newer Java version, because 1.6u25 and above have serious improvements concerning fragmentation (http://blog.ragozin.info/2011/10/java-cg-hotspots-cms-and-heap.html). One place where I saw very long pauses with fragmentation was when concurrent mode failures occurred. I am not sure if fragmentation affects minor GCs so much.

You might also want to follow another lead: swapping. GC logs also contain "[Times: user=2.19 sys=1.35, real=385.50 secs]" at the end of the lines. If "real" is much longer than "user" + "sys", your system might be swapping.

Regards,
Jörg
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20130911/d331fa67/attachment-0001.html
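To make that rule of thumb concrete with the numbers from the example line above: user + sys = 2.19 s + 1.35 s = 3.54 s of CPU time, against real = 385.50 s of wall-clock time. The GC threads were therefore actually executing for less than 1% of the pause and spent the remaining ~99% waiting on something outside the JVM; page-ins from swap are the classic cause, though an oversubscribed host or hypervisor can look similar. A healthy ParNew entry normally shows the opposite pattern, with user a multiple of real, because several GC threads accumulate CPU time in parallel.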
From pingavinash at yahoo.com Thu Sep 12 13:13:33 2013
From: pingavinash at yahoo.com (Avinash Mishra)
Date: Fri, 13 Sep 2013 04:13:33 +0800 (SGT)
Subject: Need help on long minor GC pause
In-Reply-To:
References:
Message-ID: <1379016813.83652.YahooMailNeo@web192502.mail.sg3.yahoo.com>

Hi folks,

Thanks for the quick feedback. We got lucky with the issue and got these logs today:

120850.013: [ParNew120865.142: [SoftReference, 0 refs, 0.0000052 secs]120865.142: [WeakReference, 1 refs, 0.0000022 secs]120865.142: [FinalReference, 0 refs, 0.0000014 secs]120865.142: [PhantomReference, 2 refs, 0.0000018 secs]120865.142: [JNI Weak Reference, 0.0000014 secs]: 59007K->6528K(59008K), 15.1290884 secs] 893600K->843004K(1042048K)
After GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 37142932
Max Chunk Size: 19517862
Number of Blocks: 20
Av. Block Size: 1857146
Tree Height: 8
After GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 3635496
Max Chunk Size: 3634176
Number of Blocks: 3
Av. Block Size: 1211832
Tree Height: 2
, 15.1295291 secs] [Times: user=0.13 sys=0.00, real=15.13 secs]

So you were probably right about swapping being the potential issue. The strange thing is that we have 8 GB of RAM on the server and only 2.5 GB is used. Could you please suggest how we can confirm whether it is a swapping issue (we do have perfmon data)? Also, please suggest how we can fix this problem on a Windows server.

Regards,
Avinash
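One way to check the suspicion from the GC log itself is to scan every "[Times: user=... sys=..., real=... secs]" entry and flag the ones where wall-clock time dwarfs CPU time. The sketch below assumes that standard -XX:+PrintGCDetails suffix and takes the log file name as its only argument; the class name and the 3x threshold are arbitrary choices, not anything the JVM prescribes. On the Windows side, watching Perfmon counters such as Memory\Pages/sec and the Java process's page-fault rate over the same interval should corroborate whether the box was paging at the time.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Scans a GC log and reports entries whose wall-clock time is far larger
    // than the CPU time the GC threads actually got - the pattern associated
    // with paging or CPU starvation rather than with GC work itself.
    public class GcTimesScanner {
        private static final Pattern TIMES = Pattern.compile(
                "\\[Times: user=([0-9.]+) sys=([0-9.]+), real=([0-9.]+) secs\\]");

        public static void main(String[] args) throws Exception {
            BufferedReader in = new BufferedReader(new FileReader(args[0]));
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = TIMES.matcher(line);
                if (!m.find()) {
                    continue;
                }
                double cpu  = Double.parseDouble(m.group(1)) + Double.parseDouble(m.group(2));
                double real = Double.parseDouble(m.group(3));
                // Flag pauses longer than a second where the threads were mostly waiting.
                if (real > 1.0 && real > 3.0 * cpu) {
                    System.out.printf("suspect pause: real=%.2fs, cpu=%.2fs%n", real, cpu);
                }
            }
            in.close();
        }
    }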
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20130913/c28127bb/attachment.html From pingavinash at yahoo.com Thu Sep 12 13:17:23 2013 From: pingavinash at yahoo.com (Avinash Mishra) Date: Fri, 13 Sep 2013 04:17:23 +0800 (SGT) Subject: Need help on long minor GC pause In-Reply-To: <1379016813.83652.YahooMailNeo@web192502.mail.sg3.yahoo.com> References: <1379016813.83652.YahooMailNeo@web192502.mail.sg3.yahoo.com> Message-ID: <1379017043.57686.YahooMailNeo@web192506.mail.sg3.yahoo.com> Here is the entire GC log we have from the tests: 2013-09-12T10:06:59.222-0700: 120850.013: [GC Before GC: Statistics for BinaryTreeDictionary: ------------------------------------ Total Free Space: 37567788 Max?? Chunk Size: 19517862 Number of Blocks: 21 Av.? Block? Size: 1788942 Tree????? Height: 8 Before GC: Statistics for BinaryTreeDictionary: ------------------------------------ Total Free Space: 3635496 Max?? Chunk Size: 3634176 Number of Blocks: 3 Av.? Block? Size: 1211832 Tree????? Height: 2 120850.013: [ParNew120865.142: [SoftReference, 0 refs, 0.0000052 secs]120865.142: [WeakReference, 1 refs, 0.0000022 secs]120865.142: [FinalReference, 0 refs, 0.0000014 secs]120865.142: [PhantomReference, 2 refs, 0.0000018 secs]120865.142: [JNI Weak Reference, 0.0000014 secs]: 59007K->6528K(59008K), 15.1290884 secs] 893600K->843004K(1042048K)After GC: Statistics for BinaryTreeDictionary: ------------------------------------ Total Free Space: 37142932 Max?? Chunk Size: 19517862 Number of Blocks: 20 Av.? Block? Size: 1857146 Tree????? Height: 8 After GC: Statistics for BinaryTreeDictionary: ------------------------------------ Total Free Space: 3635496 Max?? Chunk Size: 3634176 Number of Blocks: 3 Av.? Block? Size: 1211832 Tree????? Height: 2 , 15.1295291 secs] [Times: user=0.13 sys=0.00, real=15.13 secs] Total time for which application threads were stopped: 15.1305786 seconds Application time: 0.0054168 seconds Total time for which application threads were stopped: 0.0454332 seconds Application time: 0.0003027 seconds Total time for which application threads were stopped: 0.0005096 seconds Application time: 0.0208497 seconds Total time for which application threads were stopped: 0.4512979 seconds Application time: 1.7480473 seconds Total time for which application threads were stopped: 0.4840827 seconds Application time: 0.0001481 seconds Total time for which application threads were stopped: 0.5162972 seconds Application time: 0.9989582 seconds Total time for which application threads were stopped: 1.9044960 seconds Application time: 0.0033451 seconds Total time for which application threads were stopped: 0.0078556 seconds Application time: 0.0005994 seconds Total time for which application threads were stopped: 0.0347992 seconds Application time: 0.0001253 seconds Total time for which application threads were stopped: 0.0004586 seconds Application time: 0.0001029 seconds Total time for which application threads were stopped: 0.0004276 seconds Application time: 0.0020670 seconds Total time for which application threads were stopped: 0.0328372 seconds Application time: 0.0003629 seconds Total time for which application threads were stopped: 0.0407637 seconds Application time: 0.0004604 seconds Total time for which application threads were stopped: 0.0095295 seconds Application time: 0.0013711 seconds Total time for which application threads were stopped: 0.4448677 seconds Application time: 0.0010127 seconds Total time for which application threads were stopped: 0.1862938 seconds Application 
time: 0.0004584 seconds Total time for which application threads were stopped: 2.3634920 seconds Application time: 0.0006381 seconds Total time for which application threads were stopped: 1.2755493 seconds Application time: 0.0000791 seconds Total time for which application threads were stopped: 0.0003866 seconds Application time: 0.0004948 seconds Total time for which application threads were stopped: 0.0070434 seconds Application time: 0.0023128 seconds Total time for which application threads were stopped: 0.0105626 seconds Application time: 0.0010711 seconds Total time for which application threads were stopped: 0.0265656 seconds Application time: 0.0008076 seconds Total time for which application threads were stopped: 0.0599900 seconds Application time: 0.0001002 seconds Total time for which application threads were stopped: 0.0205268 seconds Application time: 0.0072575 seconds Total time for which application threads were stopped: 0.9417735 seconds Application time: 0.0005076 seconds Total time for which application threads were stopped: 0.7657093 seconds Application time: 0.0017558 seconds Total time for which application threads were stopped: 0.1212358 seconds Application time: 0.0014846 seconds Total time for which application threads were stopped: 0.5607651 seconds Application time: 0.0051868 seconds Total time for which application threads were stopped: 0.3071135 seconds Application time: 0.0002792 seconds Total time for which application threads were stopped: 0.0761226 seconds Application time: 0.0008392 seconds Total time for which application threads were stopped: 0.0004832 seconds Application time: 0.0022389 seconds Total time for which application threads were stopped: 0.0004443 seconds Application time: 0.0003162 seconds Total time for which application threads were stopped: 0.0003712 seconds Application time: 0.0003393 seconds Total time for which application threads were stopped: 0.0004577 seconds Application time: 0.0011367 seconds Total time for which application threads were stopped: 0.0004018 seconds Application time: 2.1982655 seconds Total time for which application threads were stopped: 0.0008523 seconds Application time: 1.9970691 seconds Total time for which application threads were stopped: 0.0168507 seconds Application time: 0.9985377 seconds Total time for which application threads were stopped: 0.0826776 seconds Application time: 5.7725220 seconds Total time for which application threads were stopped: 0.1754836 seconds Application time: 0.0000573 seconds Total time for which application threads were stopped: 0.0031288 seconds Application time: 0.0003676 seconds Total time for which application threads were stopped: 0.0059951 seconds Application time: 0.9894649 seconds Total time for which application threads were stopped: 0.0008202 seconds Application time: 1.9994647 seconds Total time for which application threads were stopped: 0.0008728 seconds Application time: 1.9979438 seconds Total time for which application threads were stopped: 0.0008308 seconds Application time: 7.9974667 seconds Total time for which application threads were stopped: 0.0008303 seconds Application time: 2.9985472 seconds Total time for which application threads were stopped: 0.0008017 seconds Application time: 4.7325503 seconds Total time for which application threads were stopped: 0.0008595 seconds Application time: 0.0304200 seconds Total time for which application threads were stopped: 0.0003669 seconds Application time: 1.2493383 seconds Total time for which application threads 
were stopped: 0.0008296 seconds Application time: 0.3115816 seconds Total time for which application threads were stopped: 0.0007282 seconds Application time: 0.0616963 seconds Total time for which application threads were stopped: 0.0004873 seconds Application time: 0.3744796 seconds Total time for which application threads were stopped: 0.0007073 seconds Application time: 0.2023382 seconds Total time for which application threads were stopped: 0.0004948 seconds Application time: 0.1857517 seconds
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20130913/110b9875/attachment-0001.html
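As an aside on reading the BinaryTreeDictionary statistics in the log above: the PrintFLSStatistics values are, as far as I recall, reported in heap words rather than bytes (4 bytes per word on a 32-bit JVM, 8 on 64-bit). Assuming a 32-bit JVM, which is plausible for a 1 GB heap on Windows 2003, the old-generation dictionary works out to roughly:

    Total Free Space: 37142932 words * 4 bytes ~ 148 MB free
    Max Chunk Size:   19517862 words * 4 bytes ~  78 MB contiguous

A largest free chunk of ~78 MB against a young generation of only 59008K leaves ample room for promotion, so CMS old-gen fragmentation looks like an unlikely explanation for this particular 15-second pause, while the user=0.13 sys=0.00 versus real=15.13 gap again points at the operating system. (The second, much smaller dictionary is presumably the CMS-managed perm gen.)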
From anmulholland at expedia.com Fri Sep 13 08:53:16 2013
From: anmulholland at expedia.com (Andrew Mulholland)
Date: Fri, 13 Sep 2013 16:53:16 +0100
Subject: ParNew 4x slower under 1.7 vs 1.6?
In-Reply-To:
Message-ID:

Hi Jon

Many thanks for your help in this.

To close off this thread, Jon provided help to us in investigating this issue.

We conducted some tests which demonstrated it wasn't changes in the VM causing our slowdown. (Jon directed me to create a sort of hybrid JVM by copying the libjvm.so from 1.7.0_25 into a 1.6.0_43 JDK.)

We spent our time looking into what was different between 6 and 7 outside of the JVM... and through a process of elimination identified that the change in JAXB version from 2.1 to 2.2 which came with JDK7 was causing the change in performance.

By loading the jaxb 2.1 jars into our app, rather than relying on the version provided by the JDK, we now see a similar level of performance under JDK7 as we did under JDK6.

(Also, for those interested: by loading the jaxb 2.2 jars into our app, we see the slowdown under JDK6 too.)
We're investigating what the differences are between JAXB 2.1 and 2.2 and how we use it, as suspect it could be (our) misuse of the library which is to blame here! Thanks Andrew Andrew Mulholland Operations Architect Expedia Affiliate Network p: +44 (0)20 7019 2927 | m: +44 (0)77 1585 4475 e: anmulholland at expedia.com On 8/12/13 10:57 AM, "Andrew Mulholland" wrote: >Hi Jon > > >On 8/12/13 5:50 AM, "Jon Masamitsu" wrote: > >> >>I saw your response to Bernd where you verified >>the number of GC threads that you are using but the >>amount of parallelism your getting still doesn't seem to >>be right. See below. > > >Thanks - data below as requested: > >> >>, 0.0989930 secs] [Times: user=0.46 sys=0.02, real=0.10 secs] >> >> >> >>About a 4 times speed up here (user / real ~ 4.6) >> > >For JVM 1.6.0_43: > >90% of ParNew Gcs had parallelism of at least: 5.17 >75% of ParNew Gcs had parallelism of at least: 5.83 >50% of ParNew Gcs had parallelism of at least: 6.71 >25% of ParNew Gcs had parallelism of at least 7.66 >10% of ParNew Gcs had parallelism of at least 8.5 > >Average parallelism for ParNew Gs: 6.75 (stdev 1.21) >Max Parallelism for ParNew GCs: 9 > > >> >>, 0.4156130 secs] [Times: user=0.65 sys=0.02, real=0.42 secs] >> >> >>Only about a 1.5 times speed up here (user / real ~1.5). > >For JVM 1.7.0_25: > >90% of ParNew Gcs had parallelism of at least 1.53 > >75% of ParNew Gcs had parallelism of at least 1.63 >50% of ParNew Gcs had parallelism of at least 2 >25% of ParNew Gcs had parallelism of at least: 2.72 >10% of ParNew Gcs had parallelism of at least: 5.32 > >Average parallelism for ParNew GCs: 2.58 (stdev 1.51) > >Max Parallelism for ParNew GCs: 8.6 > > > > > >> >>Can you check some other entries and verify that the amount of >>parallelism your seeing on 1.7 is only 1.5. >> > > >>From the above it seems that we do get higher than 1.5.. With jvm 1.7 - >but it is consistently significantly lower than on jvm 1.6 > >Thanks > > >Andrew > >_______________________________________________ >hotspot-gc-use mailing list >hotspot-gc-use at openjdk.java.net >http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From bernd.eckenfels at googlemail.com Fri Sep 13 11:40:23 2013 From: bernd.eckenfels at googlemail.com (Bernd Eckenfels) Date: Fri, 13 Sep 2013 20:40:23 +0200 Subject: ParNew 4x slower under 1.7 vs 1.6? In-Reply-To: References: Message-ID: Hello, since the different workload from jaxb was very visible in your par-new times, I wonder if you can also research a bit in that direction: have been the average objectsize small or larger, has the object graph beeing deeper or more gc roots. I think having a heap dump of both jaxb versions with the same VM would be a good start to compare? (I am not sure if there are statistic tools for the above numbers, but I guess the class histogram can give a good first answer) Greetings Bernd Am 13.09.2013, 17:53 Uhr, schrieb Andrew Mulholland : > Hi Jon > > Many thanks for your help in this. > > To close off this thread, Jon provided help to us in investigating this > issue. > > We conducted some tests which demonstrated it wasn't changes in the VM > causing our slowdown. (Jon directed me to create a sort of hybrid JVM by > copying the libjvm.so from 1.7.0_25 into a 1.6.0_43 JDK). > > We spent out time looking into what was different between 6 and 7 outside > of the JVM... and through a process of elimination identified that the > change in JAXB version from 2.1 to 2.2 which came with JDK7 was causing > the change in performance. 
> > By loading the jaxb 2.1 jars into our app, rather than relying on the > version provided by the JDK, we now see a similar level of performance > under JDK7 as we did under JDK6. > > (also for those interested by loading the jaxb 2.2 jars into our app, we > see the slowdown under JDK6 too). > > > We're investigating what the differences are between JAXB 2.1 and 2.2 and > how we use it, as suspect it could be (our) misuse of the library which > is > to blame here! > > Thanks > > > Andrew > > > Andrew Mulholland > Operations Architect > Expedia Affiliate Network > p: +44 (0)20 7019 2927 | m: +44 (0)77 1585 4475 > e: anmulholland at expedia.com > > > > On 8/12/13 10:57 AM, "Andrew Mulholland" > wrote: > >> Hi Jon >> >> >> On 8/12/13 5:50 AM, "Jon Masamitsu" wrote: >> >>> >>> I saw your response to Bernd where you verified >>> the number of GC threads that you are using but the >>> amount of parallelism your getting still doesn't seem to >>> be right. See below. >> >> >> Thanks - data below as requested: >> >>> >>> , 0.0989930 secs] [Times: user=0.46 sys=0.02, real=0.10 secs] >>> >>> >>> >>> About a 4 times speed up here (user / real ~ 4.6) >>> >> >> For JVM 1.6.0_43: >> >> 90% of ParNew Gcs had parallelism of at least: 5.17 >> 75% of ParNew Gcs had parallelism of at least: 5.83 >> 50% of ParNew Gcs had parallelism of at least: 6.71 >> 25% of ParNew Gcs had parallelism of at least 7.66 >> 10% of ParNew Gcs had parallelism of at least 8.5 >> >> Average parallelism for ParNew Gs: 6.75 (stdev 1.21) >> Max Parallelism for ParNew GCs: 9 >> >> >>> >>> , 0.4156130 secs] [Times: user=0.65 sys=0.02, real=0.42 secs] >>> >>> >>> Only about a 1.5 times speed up here (user / real ~1.5). >> >> For JVM 1.7.0_25: >> >> 90% of ParNew Gcs had parallelism of at least 1.53 >> >> 75% of ParNew Gcs had parallelism of at least 1.63 >> 50% of ParNew Gcs had parallelism of at least 2 >> 25% of ParNew Gcs had parallelism of at least: 2.72 >> 10% of ParNew Gcs had parallelism of at least: 5.32 >> >> Average parallelism for ParNew GCs: 2.58 (stdev 1.51) >> >> Max Parallelism for ParNew GCs: 8.6 >> >> >> >> >> >>> >>> Can you check some other entries and verify that the amount of >>> parallelism your seeing on 1.7 is only 1.5. >>> >> >> >>> From the above it seems that we do get higher than 1.5.. With jvm 1.7 - >> but it is consistently significantly lower than on jvm 1.6 >> >> Thanks >> >> >> Andrew >> >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use -- https://plus.google.com/u/1/108084227682171831683/about From ysr1729 at gmail.com Fri Sep 13 13:38:54 2013 From: ysr1729 at gmail.com (Srinivas Ramakrishna) Date: Fri, 13 Sep 2013 13:38:54 -0700 Subject: ParNew 4x slower under 1.7 vs 1.6? In-Reply-To: References: Message-ID: I am guessing a comparison of the gc log 7 vs 6 that Andrew sent in his first email on this thread might already provide useful info. But of course a comparison of logs where only the JVM has been swapped would probably be better and would provide a lot of the kind of inforation you are seeking. Good point about the extra info from class histogram (+PrintClassHistogram{Before,After}GC). 
-- ramki On Fri, Sep 13, 2013 at 11:40 AM, Bernd Eckenfels wrote: > Hello, > > since the different workload from jaxb was very visible in your par-new > times, I wonder if you can also research a bit in that direction: have > been the average objectsize small or larger, has the object graph beeing > deeper or more gc roots. > > I think having a heap dump of both jaxb versions with the same VM would be > a good start to compare? (I am not sure if there are statistic tools for > the above numbers, but I guess the class histogram can give a good first > answer) > > Greetings > Bernd > > > Am 13.09.2013, 17:53 Uhr, schrieb Andrew Mulholland > : > >> Hi Jon >> >> Many thanks for your help in this. >> >> To close off this thread, Jon provided help to us in investigating this >> issue. >> >> We conducted some tests which demonstrated it wasn't changes in the VM >> causing our slowdown. (Jon directed me to create a sort of hybrid JVM by >> copying the libjvm.so from 1.7.0_25 into a 1.6.0_43 JDK). >> >> We spent out time looking into what was different between 6 and 7 outside >> of the JVM... and through a process of elimination identified that the >> change in JAXB version from 2.1 to 2.2 which came with JDK7 was causing >> the change in performance. >> >> By loading the jaxb 2.1 jars into our app, rather than relying on the >> version provided by the JDK, we now see a similar level of performance >> under JDK7 as we did under JDK6. >> >> (also for those interested by loading the jaxb 2.2 jars into our app, we >> see the slowdown under JDK6 too). >> >> >> We're investigating what the differences are between JAXB 2.1 and 2.2 and >> how we use it, as suspect it could be (our) misuse of the library which >> is >> to blame here! >> >> Thanks >> >> >> Andrew >> >> >> Andrew Mulholland >> Operations Architect >> Expedia Affiliate Network >> p: +44 (0)20 7019 2927 | m: +44 (0)77 1585 4475 >> e: anmulholland at expedia.com >> >> >> >> On 8/12/13 10:57 AM, "Andrew Mulholland" >> wrote: >> >>> Hi Jon >>> >>> >>> On 8/12/13 5:50 AM, "Jon Masamitsu" wrote: >>> >>>> >>>> I saw your response to Bernd where you verified >>>> the number of GC threads that you are using but the >>>> amount of parallelism your getting still doesn't seem to >>>> be right. See below. >>> >>> >>> Thanks - data below as requested: >>> >>>> >>>> , 0.0989930 secs] [Times: user=0.46 sys=0.02, real=0.10 secs] >>>> >>>> >>>> >>>> About a 4 times speed up here (user / real ~ 4.6) >>>> >>> >>> For JVM 1.6.0_43: >>> >>> 90% of ParNew Gcs had parallelism of at least: 5.17 >>> 75% of ParNew Gcs had parallelism of at least: 5.83 >>> 50% of ParNew Gcs had parallelism of at least: 6.71 >>> 25% of ParNew Gcs had parallelism of at least 7.66 >>> 10% of ParNew Gcs had parallelism of at least 8.5 >>> >>> Average parallelism for ParNew Gs: 6.75 (stdev 1.21) >>> Max Parallelism for ParNew GCs: 9 >>> >>> >>>> >>>> , 0.4156130 secs] [Times: user=0.65 sys=0.02, real=0.42 secs] >>>> >>>> >>>> Only about a 1.5 times speed up here (user / real ~1.5). 
>>> >>> For JVM 1.7.0_25: >>> >>> 90% of ParNew Gcs had parallelism of at least 1.53 >>> >>> 75% of ParNew Gcs had parallelism of at least 1.63 >>> 50% of ParNew Gcs had parallelism of at least 2 >>> 25% of ParNew Gcs had parallelism of at least: 2.72 >>> 10% of ParNew Gcs had parallelism of at least: 5.32 >>> >>> Average parallelism for ParNew GCs: 2.58 (stdev 1.51) >>> >>> Max Parallelism for ParNew GCs: 8.6 >>> >>> >>> >>> >>> >>>> >>>> Can you check some other entries and verify that the amount of >>>> parallelism your seeing on 1.7 is only 1.5. >>>> >>> >>> >>>> From the above it seems that we do get higher than 1.5.. With jvm 1.7 - >>> but it is consistently significantly lower than on jvm 1.6 >>> >>> Thanks >>> >>> >>> Andrew >>> >>> _______________________________________________ >>> hotspot-gc-use mailing list >>> hotspot-gc-use at openjdk.java.net >>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >> >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > -- > https://plus.google.com/u/1/108084227682171831683/about > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From anmulholland at expedia.com Sat Sep 14 06:39:23 2013 From: anmulholland at expedia.com (Andrew Mulholland) Date: Sat, 14 Sep 2013 14:39:23 +0100 Subject: ParNew 4x slower under 1.7 vs 1.6? In-Reply-To: Message-ID: Hi Doing some more testing, jaxb 2.2.1->2.2.4-1 all perform well too. Since 2.2.4 update 2 (which comes in JDK7 since 7u6) and 2.2.5 etc, we experience poor performance. It is our belief that r3670, which adds a finalizer method to the Unmarshaller (which was an attempted fix for https://java.net/jira/browse/JAXB-831 ) is the change which causes this behavior. We suspect that we're stressing the finalizer, as this app has a reasonable amount of load, dealing with fairly large objects.. Thanks Andrew On 9/13/13 7:40 PM, "Bernd Eckenfels" wrote: >Hello, > >since the different workload from jaxb was very visible in your par-new >times, I wonder if you can also research a bit in that direction: have >been the average objectsize small or larger, has the object graph beeing >deeper or more gc roots. > >I think having a heap dump of both jaxb versions with the same VM would >be >a good start to compare? (I am not sure if there are statistic tools for >the above numbers, but I guess the class histogram can give a good first >answer) > >Greetings >Bernd > > >Am 13.09.2013, 17:53 Uhr, schrieb Andrew Mulholland >: > >> Hi Jon >> >> Many thanks for your help in this. >> >> To close off this thread, Jon provided help to us in investigating this >> issue. >> >> We conducted some tests which demonstrated it wasn't changes in the VM >> causing our slowdown. (Jon directed me to create a sort of hybrid JVM by >> copying the libjvm.so from 1.7.0_25 into a 1.6.0_43 JDK). >> >> We spent out time looking into what was different between 6 and 7 >>outside >> of the JVM... and through a process of elimination identified that the >> change in JAXB version from 2.1 to 2.2 which came with JDK7 was causing >> the change in performance. >> >> By loading the jaxb 2.1 jars into our app, rather than relying on the >> version provided by the JDK, we now see a similar level of performance >> under JDK7 as we did under JDK6. 
>> >> (also for those interested by loading the jaxb 2.2 jars into our app, we >> see the slowdown under JDK6 too). >> >> >> We're investigating what the differences are between JAXB 2.1 and 2.2 >>and >> how we use it, as suspect it could be (our) misuse of the library which >> >> is >> to blame here! >> >> Thanks >> >> >> Andrew >> >> >> Andrew Mulholland >> Operations Architect >> Expedia Affiliate Network >> p: +44 (0)20 7019 2927 | m: +44 (0)77 1585 4475 >> e: anmulholland at expedia.com >> >> >> >> On 8/12/13 10:57 AM, "Andrew Mulholland" >> wrote: >> >>> Hi Jon >>> >>> >>> On 8/12/13 5:50 AM, "Jon Masamitsu" wrote: >>> >>>> >>>> I saw your response to Bernd where you verified >>>> the number of GC threads that you are using but the >>>> amount of parallelism your getting still doesn't seem to >>>> be right. See below. >>> >>> >>> Thanks - data below as requested: >>> >>>> >>>> , 0.0989930 secs] [Times: user=0.46 sys=0.02, real=0.10 secs] >>>> >>>> >>>> >>>> About a 4 times speed up here (user / real ~ 4.6) >>>> >>> >>> For JVM 1.6.0_43: >>> >>> 90% of ParNew Gcs had parallelism of at least: 5.17 >>> 75% of ParNew Gcs had parallelism of at least: 5.83 >>> 50% of ParNew Gcs had parallelism of at least: 6.71 >>> 25% of ParNew Gcs had parallelism of at least 7.66 >>> 10% of ParNew Gcs had parallelism of at least 8.5 >>> >>> Average parallelism for ParNew Gs: 6.75 (stdev 1.21) >>> Max Parallelism for ParNew GCs: 9 >>> >>> >>>> >>>> , 0.4156130 secs] [Times: user=0.65 sys=0.02, real=0.42 secs] >>>> >>>> >>>> Only about a 1.5 times speed up here (user / real ~1.5). >>> >>> For JVM 1.7.0_25: >>> >>> 90% of ParNew Gcs had parallelism of at least 1.53 >>> >>> 75% of ParNew Gcs had parallelism of at least 1.63 >>> 50% of ParNew Gcs had parallelism of at least 2 >>> 25% of ParNew Gcs had parallelism of at least: 2.72 >>> 10% of ParNew Gcs had parallelism of at least: 5.32 >>> >>> Average parallelism for ParNew GCs: 2.58 (stdev 1.51) >>> >>> Max Parallelism for ParNew GCs: 8.6 >>> >>> >>> >>> >>> >>>> >>>> Can you check some other entries and verify that the amount of >>>> parallelism your seeing on 1.7 is only 1.5. >>>> >>> >>> >>>> From the above it seems that we do get higher than 1.5.. With jvm 1.7 >>>>- >>> but it is consistently significantly lower than on jvm 1.6 >>> >>> Thanks >>> >>> >>> Andrew >>> >>> _______________________________________________ >>> hotspot-gc-use mailing list >>> hotspot-gc-use at openjdk.java.net >>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >> >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > >-- >https://plus.google.com/u/1/108084227682171831683/about >_______________________________________________ >hotspot-gc-use mailing list >hotspot-gc-use at openjdk.java.net >http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From holger.hoffstaette at googlemail.com Sat Sep 14 07:08:26 2013 From: holger.hoffstaette at googlemail.com (=?UTF-8?B?SG9sZ2VyIEhvZmZzdMOkdHRl?=) Date: Sat, 14 Sep 2013 16:08:26 +0200 Subject: ParNew 4x slower under 1.7 vs 1.6? In-Reply-To: References: Message-ID: <52346DDA.7010804@googlemail.com> On 09/14/13 15:39, Andrew Mulholland wrote: > It is our belief that r3670, which adds a finalizer method to the > Unmarshaller (which was an attempted fix for > https://java.net/jira/browse/JAXB-831 ) is the change which causes this > behavior. Are you setting -XX:+ParallelRefProcEnabled ? 
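For readers wondering why a finalize() override on every Unmarshaller instance matters at all: any object whose class declares a non-trivial finalize() is registered with the JVM's finalization machinery when it is allocated, has to be discovered during the reference-processing phase of each collection, and is kept alive until the (single) finalizer thread has run it, so formerly short-lived instances survive extra young collections and add to exactly the phase that -XX:+ParallelRefProcEnabled parallelises. The toy program below is only meant to make that mechanism visible; the class names and allocation counts are made up and have nothing to do with JAXB itself. Running it with and without the "finalize" argument under -verbose:gc should show noticeably different young-GC behaviour.

    // Toy comparison of allocating plain objects vs. objects that override
    // finalize(). Each Finalizable instance is registered for finalization at
    // allocation time and must be discovered, enqueued and finalized before
    // its memory can be reclaimed, which inflates reference processing and
    // keeps garbage alive across collections.
    public class FinalizerPressure {

        static final java.util.concurrent.atomic.AtomicLong FINALIZED =
                new java.util.concurrent.atomic.AtomicLong();

        static class Plain {
            byte[] payload = new byte[256];
        }

        static class Finalizable {
            byte[] payload = new byte[256];
            @Override
            protected void finalize() {
                FINALIZED.incrementAndGet(); // non-empty body, so instances are registered
            }
        }

        public static void main(String[] args) {
            boolean withFinalizer = args.length > 0 && "finalize".equals(args[0]);
            for (long i = 1; i <= 20000000L; i++) {
                Object o = withFinalizer ? new Finalizable() : new Plain();
                if (i % 5000000 == 0) {
                    System.out.println(i + " allocated, finalized so far: "
                            + FINALIZED.get() + " (" + o.getClass().getSimpleName() + ")");
                }
            }
        }
    }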
If not, give it a try. It's off by default, which frequently causes problems with many weak references.

Ultimately the problem is that JAXB - like many other IMHO broken libraries - exposes no explicit resource management or lifecycle control mechanisms, and passively relies on mostly nondeterministic GC behaviour.

-h

From denny.kettwig at werum.de Mon Sep 23 06:15:04 2013
From: denny.kettwig at werum.de (Denny Kettwig)
Date: Mon, 23 Sep 2013 13:15:04 +0000
Subject: Understanding TargetSurvivorRatio
Message-ID: <6175F8C4FE407D4F830EDA25C27A43172F810DF7@Werum1450.werum.net>

Dear all,

I am currently trying to understand the TargetSurvivorRatio parameter in detail, and after several days of research and tests I have come to no conclusion, so I would like to share my thoughts with you.

I tested this parameter in two ways, setting it to 1 and to 99, and then observed the survivor space during the same system actions in both cases. The results left me even more confused than before. With the parameter set to 99, I saw a very high initial occupancy of the survivor space (about 99%, as expected), which after 5 minutes dropped down to about 10%. In the other scenario, with the parameter set to 1, the occupancy of the survivor space was mostly at 10%.

Could you please explain in detail what this parameter does, and how setting it to 90 might affect large server applications and improve their performance?

Here are the parameters I used:

set JAVA_OPTS=%JAVA_OPTS% -Xms6g
set JAVA_OPTS=%JAVA_OPTS% -Xmx6g
set JAVA_OPTS=%JAVA_OPTS% -Xmn1g
set JAVA_OPTS=%JAVA_OPTS% -XX:+UseParNewGC
set JAVA_OPTS=%JAVA_OPTS% -XX:+UseConcMarkSweepGC
set JAVA_OPTS=%JAVA_OPTS% -XX:SurvivorRatio=8
set JAVA_OPTS=%JAVA_OPTS% -XX:TargetSurvivorRatio=1 or 99
set JAVA_OPTS=%JAVA_OPTS% -XX:PermSize=256m
set JAVA_OPTS=%JAVA_OPTS% -XX:MaxPermSize=256m
set JAVA_OPTS=%JAVA_OPTS% -XX:+ExplicitGCInvokesConcurrent
set JAVA_OPTS=%JAVA_OPTS% -XX:+CMSClassUnloadingEnabled
set JAVA_OPTS=%JAVA_OPTS% -XX:+UseCompressedOops

Kind Regards,
Denny
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20130923/177c5bef/attachment.html

From holger.hoffstaette at googlemail.com Tue Sep 24 03:35:04 2013
From: holger.hoffstaette at googlemail.com (Holger Hoffstätte)
Date: Tue, 24 Sep 2013 12:35:04 +0200
Subject: Understanding TargetSurvivorRatio
In-Reply-To: <6175F8C4FE407D4F830EDA25C27A43172F810DF7@Werum1450.werum.net>
References: <6175F8C4FE407D4F830EDA25C27A43172F810DF7@Werum1450.werum.net>
Message-ID: <52416AD8.60503@googlemail.com>

On 09/23/13 15:15, Denny Kettwig wrote:
> I am currently trying to understand the TargetSurvivorRatio parameter
> in detail, and after several days of research and tests I have come to
> no conclusion, so I would like to share my thoughts with you.

There's a decent human-readable explanation at:
http://www.techpaste.com/2012/02/java-command-line-options-jvm-performance-improvement/

> The results left me even more confused than before.

Welcome to CMS Fight Club. :-)

Whether testing extreme values (1 or 99) yields useful results is anybody's guess (probably not), but essentially a higher-than-default (50%) TSR should prevent more objects from tenuring prematurely, at the cost of an increased risk of "bumping your head" when an allocation spike happens and the survivor spaces are full. For smaller heaps (<=8G) IMHO simply decreasing the SurvivorRatio (from its default 8 down to e.g.
4 or 6) is less fragile, since it should be mostly independent of any "guessed" allocation rate and scales with the rest of the system/application, even if somewhat disproportionally. The effect should be roughly the same. The costs may not be. -h
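To put rough numbers on that for the flags quoted above (as I understand the HotSpot sizing rules): with -Xmn1g and -XX:SurvivorRatio=8, the young generation is split eden:survivor:survivor = 8:1:1, and TargetSurvivorRatio only steers the adaptive tenuring threshold, i.e. the occupancy the JVM aims for in the "to" survivor space after a scavenge:

    each survivor space = 1024 MB / (8 + 2) ~ 102 MB   (SurvivorRatio=8)
    each survivor space = 1024 MB / (4 + 2) ~ 171 MB   (SurvivorRatio=4)
    tenuring target     = survivor_capacity * TargetSurvivorRatio / 100
                        ~ 51 MB at the default 50, ~101 MB at 99, ~1 MB at 1

After each scavenge, HotSpot picks the smallest object age at which the cumulative surviving bytes exceed that target and uses it as the next tenuring threshold. With TargetSurvivorRatio=99 the survivor space is allowed to fill almost completely before the threshold is lowered, which matches the ~99% occupancy observed at first; with TargetSurvivorRatio=1 nearly everything is tenured after a single copy, so the survivor spaces sit mostly empty, which matches the ~10% occupancy seen in both experiments described earlier in the thread.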