From ryarran1 at yahoo.com Fri Aug 1 08:52:10 2008 From: ryarran1 at yahoo.com (Rhys Yarranton) Date: Fri, 1 Aug 2008 08:52:10 -0700 (PDT) Subject: Minimum permgen size Message-ID: <6375.55375.qm@web54306.mail.re2.yahoo.com> Is there a way to set the minimum permgen size?? (Analogous to setting the max size with -XX:MaxPermSize.) Reason I ask is we have a situation where the permgen is resizing itself at an unfortunate time, causing a >1s VM pause.? We'd like to size it large enough so it never has to resize. Thanks, r. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080801/0b565452/attachment.html From Jon.Masamitsu at Sun.COM Fri Aug 1 09:13:21 2008 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Fri, 01 Aug 2008 09:13:21 -0700 Subject: Minimum permgen size In-Reply-To: <6375.55375.qm@web54306.mail.re2.yahoo.com> References: <6375.55375.qm@web54306.mail.re2.yahoo.com> Message-ID: <48933621.3090000@sun.com> Rhys Yarranton wrote On 08/01/08 08:52,: > Is there a way to set the minimum permgen size? (Analogous to setting > the max size with -XX:MaxPermSize.) > > Reason I ask is we have a situation where the permgen is resizing > itself at an unfortunate time, causing a >1s VM pause. We'd like to > size it large enough so it never has to resize. > > Thanks, r. > > >------------------------------------------------------------------------ > >_______________________________________________ >hotspot-gc-use mailing list >hotspot-gc-use at openjdk.java.net >http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > Try using -XX:PermSize=. For example, java_home -XX:PermSize=128m -XX:+PrintGCDetails -version java version "1.7.0-ea" Java(TM) SE Runtime Environment (build 1.7.0-ea-b31) Java HotSpot(TM) Server VM (build 14.0-b01, mixed mode) Heap PSYoungGen total 10752K, used 184K [0xf1000000, 0xf1c00000, 0xfbc00000) eden space 9216K, 2% used [0xf1000000,0xf102e158,0xf1900000) from space 1536K, 0% used [0xf1a80000,0xf1a80000,0xf1c00000) to space 1536K, 0% used [0xf1900000,0xf1900000,0xf1a80000) PSOldGen total 24576K, used 0K [0xdb800000, 0xdd000000, 0xf1000000) object space 24576K, 0% used [0xdb800000,0xdb800000,0xdd000000) PSPermGen total 131072K, used 1484K [0xd3800000, 0xdb800000, 0xdb800000) object space 131072K, 1% used [0xd3800000,0xd3973010,0xdb800000) From mike at mikefinn.com Mon Aug 11 11:47:48 2008 From: mike at mikefinn.com (Mike Finn) Date: Mon, 11 Aug 2008 13:47:48 -0500 Subject: jdk 1.4.2_17 promotion failure (fragmentation?) Message-ID: <48A08954.10504@mikefinn.com> We have a large, long running server application running in Weblogic (using jdk 1.4.2_17) and using CMS. We originally were running in 32-bit mode and were getting promotion failures and concurrent mode failures. When this happened, there was usually a good amount of tenured space free, so we thought that the problem was with fragmentation of the free space in the tenured generation. To fix that, we've tried increasing the heap size. We have kept upping the heap size (moving to 64-bit to do so) and now have a heap size of 14G (see command line options below). The problem persisted and we experimented with lowering the newsize to try to reduce the requirement for contiguous space in tenured space, but we still promotion failures (see log snippet at the end of this email). Is there anything else we can do in regards to tuning (lower the CMSInitiatingOccupancyFraction?) ? 
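(For example, we could try dropping it from the current 70 to something
like

    -XX:CMSInitiatingOccupancyFraction=50

in the options below, so that CMS kicks in while there is still more
free space left in tenured -- but that is just a guess on our part.)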
Or are we going to have to move to a newer JDK? Or is it still possible that we have a memory leak or some other abberrant program behavior? /j2sdk1.4.2_17/bin/java -server -XX:CMSInitiatingOccupancyFraction=70 -XX:NewSize=320m -XX:MaxNewSize=320m -XX:SurvivorRatio=8 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:PermSize=96m -XX:MaxPermSize=96m -XX:MaxTenuringThreshold=5 -Xms14336m -Xmx14336m -Xss256k -XX:+HandlePromotionFailure -XX:+PrintTenuringDistribution -d64 265150.609: [GC {Heap before GC invocations=57327: Heap par new generation total 294912K, used 273751K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, 0xfffffffbff000000) from space 32768K, 35% used [0xfffffffc01000000, 0xfffffffc01b55cf0, 0xfffffffc03000000) to space 32768K, 0% used [0xfffffffbff000000, 0xfffffffbff000000, 0xfffffffc01000000) concurrent mark-sweep generation total 14352384K, used 11676573K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265150.611: [ParNew Desired survivor size 16777216 bytes, new threshold 2 (max 5) - age 1: 11435216 bytes, 11435216 total - age 2: 8139232 bytes, 19574448 total : 273751K->19272K(294912K), 0.2576767 secs] 11950324K->11695845K(14647296K) Heap after GC invocations=57328: Heap par new generation total 294912K, used 19272K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 0% used [0xfffffffbef000000, 0xfffffffbef000000, 0xfffffffbff000000) from space 32768K, 58% used [0xfffffffbff000000, 0xfffffffc002d21b0, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11676573K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) } , 0.2600295 secs] 265152.992: [GC {Heap before GC invocations=57328: Heap par new generation total 294912K, used 281416K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, 0xfffffffbff000000) from space 32768K, 58% used [0xfffffffbff000000, 0xfffffffc002d21b0, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11676573K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265152.994: [ParNew (promotion failed) Desired survivor size 16777216 bytes, new threshold 2 (max 5) - age 1: 15741536 bytes, 15741536 total - age 2: 7459336 bytes, 23200872 total : 281416K->281416K(294912K), 2.7625640 secs]265155.757: [CMS265176.917: [CMS-concurrent-preclean: 30.615/36.559 secs] (concurrent mode failure)[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor193] From doug.jones at eds.com Mon Aug 11 12:06:52 2008 From: doug.jones at eds.com (Jones, Doug H) Date: Tue, 12 Aug 2008 07:06:52 +1200 Subject: jdk 1.4.2_17 promotion failure (fragmentation?) 
In-Reply-To: <48A08954.10504@mikefinn.com> References: <48A08954.10504@mikefinn.com> Message-ID: <027FCB5D4C65CC4CA714042A4EE8CC6B04B2A954@nzprm231.apac.corp.eds.com> Hi Mike, It is likely to be due to fragmentation of tenured (a CMS GC does not compact tenured). Under the 1.4.2 the 'New Generation Guarantee' requires that when a scavenge occurs there is contiguous space available in tenured equal to the size of the New area (5.0 relaxes that to just being enough space available, not necessarily contiguous). The fix is easy: you are half-way there with setting -XX:CMSInitiatingOccupancyFraction=70. However to tell the JVM to take notice of it you also need to add -XX:+UseCMSInitiatingOccupancyOnly, then CMS Collections will always kick in when tenured is approx 70% full. I would suspect that currently they don't until tenured is well above 90% full (1.4.2 is more optimistic I think than 5.0 about its ability to schedule a CMS GC 'JIT with a bit to spare', but conc-mode-failures can still be a problem under 5.0). Doug. -----Original Message----- From: hotspot-gc-use-bounces at openjdk.java.net [mailto:hotspot-gc-use-bounces at openjdk.java.net] On Behalf Of Mike Finn Sent: Tuesday, 12 August 2008 6:48 a.m. To: hotspot-gc-use at openjdk.java.net Subject: jdk 1.4.2_17 promotion failure (fragmentation?) We have a large, long running server application running in Weblogic (using jdk 1.4.2_17) and using CMS. We originally were running in 32-bit mode and were getting promotion failures and concurrent mode failures. When this happened, there was usually a good amount of tenured space free, so we thought that the problem was with fragmentation of the free space in the tenured generation. To fix that, we've tried increasing the heap size. We have kept upping the heap size (moving to 64-bit to do so) and now have a heap size of 14G (see command line options below). The problem persisted and we experimented with lowering the newsize to try to reduce the requirement for contiguous space in tenured space, but we still promotion failures (see log snippet at the end of this email). Is there anything else we can do in regards to tuning (lower the CMSInitiatingOccupancyFraction?) ? Or are we going to have to move to a newer JDK? Or is it still possible that we have a memory leak or some other abberrant program behavior? 
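(For reference, assuming -XX:SurvivorRatio=8 means an eden:survivor
ratio of 8:1, the options below should work out to roughly

    eden     = 320m * 8/10   = 256m       (262144K in the log)
    survivor = 320m * 1/10   =  32m each  ( 32768K in the log)
    tenured  = 14336m - 320m = 14016m     (14352384K in the log)

which is consistent with the spaces reported in the log snippet.)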
/j2sdk1.4.2_17/bin/java -server -XX:CMSInitiatingOccupancyFraction=70 -XX:NewSize=320m -XX:MaxNewSize=320m -XX:SurvivorRatio=8 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:PermSize=96m -XX:MaxPermSize=96m -XX:MaxTenuringThreshold=5 -Xms14336m -Xmx14336m -Xss256k -XX:+HandlePromotionFailure -XX:+PrintTenuringDistribution -d64 265150.609: [GC {Heap before GC invocations=57327: Heap par new generation total 294912K, used 273751K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, 0xfffffffbff000000) from space 32768K, 35% used [0xfffffffc01000000, 0xfffffffc01b55cf0, 0xfffffffc03000000) to space 32768K, 0% used [0xfffffffbff000000, 0xfffffffbff000000, 0xfffffffc01000000) concurrent mark-sweep generation total 14352384K, used 11676573K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265150.611: [ParNew Desired survivor size 16777216 bytes, new threshold 2 (max 5) - age 1: 11435216 bytes, 11435216 total - age 2: 8139232 bytes, 19574448 total : 273751K->19272K(294912K), 0.2576767 secs] 11950324K->11695845K(14647296K) Heap after GC invocations=57328: Heap par new generation total 294912K, used 19272K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 0% used [0xfffffffbef000000, 0xfffffffbef000000, 0xfffffffbff000000) from space 32768K, 58% used [0xfffffffbff000000, 0xfffffffc002d21b0, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11676573K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) } , 0.2600295 secs] 265152.992: [GC {Heap before GC invocations=57328: Heap par new generation total 294912K, used 281416K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, 0xfffffffbff000000) from space 32768K, 58% used [0xfffffffbff000000, 0xfffffffc002d21b0, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11676573K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265152.994: [ParNew (promotion failed) Desired survivor size 16777216 bytes, new threshold 2 (max 5) - age 1: 15741536 bytes, 15741536 total - age 2: 7459336 bytes, 23200872 total : 281416K->281416K(294912K), 2.7625640 secs]265155.757: [CMS265176.917: [CMS-concurrent-preclean: 30.615/36.559 secs] (concurrent mode failure)[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor193] _______________________________________________ hotspot-gc-use mailing list hotspot-gc-use at openjdk.java.net http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From mike at mikefinn.com Mon Aug 11 12:27:34 2008 From: mike at mikefinn.com (Mike Finn) Date: Mon, 11 Aug 2008 14:27:34 -0500 Subject: jdk 1.4.2_17 promotion failure (fragmentation?) 
In-Reply-To: <027FCB5D4C65CC4CA714042A4EE8CC6B04B2A954@nzprm231.apac.corp.eds.com> References: <48A08954.10504@mikefinn.com> <027FCB5D4C65CC4CA714042A4EE8CC6B04B2A954@nzprm231.apac.corp.eds.com> Message-ID: <48A092A6.6050000@mikefinn.com> Thanks for the reply. We had considered that, but, if I'm reading our logs correctly, it seems that CMS is started pretty close to 70 every time: ( here's a snippet of our CMS initial marks) 144975.040: [GC [1 CMS-initial-mark: 10047659K(14352384K)] 10064265K(14647296K), 0.1464111 secs] 147242.326: [GC [1 CMS-initial-mark: 10057190K(14352384K)] 10081419K(14647296K), 0.1732362 secs] 149781.334: [GC [1 CMS-initial-mark: 10050150K(14352384K)] 10063772K(14647296K), 0.1149508 secs] 155416.122: [GC [1 CMS-initial-mark: 10048398K(14352384K)] 10065437K(14647296K), 0.1324914 secs] 162762.754: [GC [1 CMS-initial-mark: 10047344K(14352384K)] 10063850K(14647296K), 0.1310949 secs] Jones, Doug H wrote: > Hi Mike, > > It is likely to be due to fragmentation of tenured (a CMS GC does not > compact tenured). Under the 1.4.2 the 'New Generation Guarantee' > requires that when a scavenge occurs there is contiguous space available > in tenured equal to the size of the New area (5.0 relaxes that to just > being enough space available, not necessarily contiguous). > > The fix is easy: you are half-way there with setting > -XX:CMSInitiatingOccupancyFraction=70. However to tell the JVM to take > notice of it you also need to add -XX:+UseCMSInitiatingOccupancyOnly, > then CMS Collections will always kick in when tenured is approx 70% > full. I would suspect that currently they don't until tenured is well > above 90% full (1.4.2 is more optimistic I think than 5.0 about its > ability to schedule a CMS GC 'JIT with a bit to spare', but > conc-mode-failures can still be a problem under 5.0). > > Doug. > > > > > From Y.S.Ramakrishna at Sun.COM Mon Aug 11 16:02:38 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Mon, 11 Aug 2008 16:02:38 -0700 Subject: jdk 1.4.2_17 promotion failure (fragmentation?) In-Reply-To: <48A08954.10504@mikefinn.com> References: <48A08954.10504@mikefinn.com> Message-ID: It's almost certainly fragmentation, although the extent of fragmentation appears quite excessive. You have roughly 2.9 GB of free space in the old generation at the point at which the promotion failed. The scavenge preceding the one in which the promotion failure occurred promoted almost nothing to the old generation, so the expectation is that the failing scavenge would also try to promote a pretty small amount of data. Thus, it would seem that the 2.9 GB of free space in the old generation must be excessively fragmented not to be able to absorb even that small amount of promotion. Frankly, I am surprised at this behaviour. Is there something inherent in your application that causes the sizes and lifetimes of your longer-lived objects to vary a lot over time? Can you share with us a longer GC log and perhaps some data as to the frequency of such promotion failure events? I'll cintact you off-line for the GC logs. -- ramki ----- Original Message ----- From: Mike Finn Date: Monday, August 11, 2008 11:48 am Subject: jdk 1.4.2_17 promotion failure (fragmentation?) To: hotspot-gc-use at openjdk.java.net > We have a large, long running server application running in Weblogic > (using jdk 1.4.2_17) and using CMS. > > We originally were running in 32-bit mode and were getting promotion > failures and concurrent mode failures. 
When this happened, there was > usually a good amount of tenured space free, so we thought that the > problem was with fragmentation of the free space in the tenured generation. > > To fix that, we've tried increasing the heap size. We have kept upping > > the heap size (moving to 64-bit to do so) and now have a heap size of > > 14G (see command line options below). The problem persisted and we > experimented with lowering the newsize to try to reduce the > requirement > for contiguous space in tenured space, but we still promotion failures > > (see log snippet at the end of this email). > > Is there anything else we can do in regards to tuning (lower the > CMSInitiatingOccupancyFraction?) ? Or are we going to have to move to > a > newer JDK? Or is it still possible that we have a memory leak or some > > other abberrant program behavior? > > /j2sdk1.4.2_17/bin/java -server -XX:CMSInitiatingOccupancyFraction=70 > > -XX:NewSize=320m -XX:MaxNewSize=320m -XX:SurvivorRatio=8 > -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:PermSize=96m > -XX:MaxPermSize=96m -XX:MaxTenuringThreshold=5 -Xms14336m -Xmx14336m > -Xss256k -XX:+HandlePromotionFailure -XX:+PrintTenuringDistribution -d64 > > > > 265150.609: [GC {Heap before GC invocations=57327: > Heap > par new generation total 294912K, used 273751K [0xfffffffbef000000, > > 0xfffffffc03000000, 0xfffffffc03000000) > eden space 262144K, 100% used [0xfffffffbef000000, > 0xfffffffbff000000, > 0xfffffffbff000000) > from space 32768K, 35% used [0xfffffffc01000000, > 0xfffffffc01b55cf0, > 0xfffffffc03000000) > to space 32768K, 0% used [0xfffffffbff000000, > 0xfffffffbff000000, > 0xfffffffc01000000) > concurrent mark-sweep generation total 14352384K, used 11676573K > [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) > concurrent-mark-sweep perm gen total 98304K, used 67646K > [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) > 265150.611: [ParNew > Desired survivor size 16777216 bytes, new threshold 2 (max 5) > - age 1: 11435216 bytes, 11435216 total > - age 2: 8139232 bytes, 19574448 total > : 273751K->19272K(294912K), 0.2576767 secs] > 11950324K->11695845K(14647296K) Heap after GC invocations=57328: > Heap > par new generation total 294912K, used 19272K [0xfffffffbef000000, > > 0xfffffffc03000000, 0xfffffffc03000000) > eden space 262144K, 0% used [0xfffffffbef000000, > 0xfffffffbef000000, > 0xfffffffbff000000) > from space 32768K, 58% used [0xfffffffbff000000, > 0xfffffffc002d21b0, > 0xfffffffc01000000) > to space 32768K, 0% used [0xfffffffc01000000, > 0xfffffffc01000000, > 0xfffffffc03000000) > concurrent mark-sweep generation total 14352384K, used 11676573K > [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) > concurrent-mark-sweep perm gen total 98304K, used 67646K > [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) > } , 0.2600295 secs] > 265152.992: [GC {Heap before GC invocations=57328: > Heap > par new generation total 294912K, used 281416K [0xfffffffbef000000, > > 0xfffffffc03000000, 0xfffffffc03000000) > eden space 262144K, 100% used [0xfffffffbef000000, > 0xfffffffbff000000, > 0xfffffffbff000000) > from space 32768K, 58% used [0xfffffffbff000000, > 0xfffffffc002d21b0, > 0xfffffffc01000000) > to space 32768K, 0% used [0xfffffffc01000000, > 0xfffffffc01000000, > 0xfffffffc03000000) > concurrent mark-sweep generation total 14352384K, used 11676573K > [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) > concurrent-mark-sweep perm gen total 98304K, used 67646K > [0xffffffff6f000000, 0xffffffff75000000, 
0xffffffff75000000) > 265152.994: [ParNew (promotion failed) > Desired survivor size 16777216 bytes, new threshold 2 (max 5) > - age 1: 15741536 bytes, 15741536 total > - age 2: 7459336 bytes, 23200872 total > : 281416K->281416K(294912K), 2.7625640 secs]265155.757: > [CMS265176.917: > [CMS-concurrent-preclean: 30.615/36.559 secs] > (concurrent mode failure)[Unloading class > sun.reflect.GeneratedSerializationConstructorAccessor193] > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From mike at mikefinn.com Mon Aug 11 20:09:07 2008 From: mike at mikefinn.com (Mike Finn) Date: Mon, 11 Aug 2008 22:09:07 -0500 Subject: jdk 1.4.2_17 promotion failure (fragmentation?) In-Reply-To: <027FCB5D4C65CC4CA714042A4EE8CC6B04B2AA74@nzprm231.apac.corp.eds.com> References: <48A08954.10504@mikefinn.com> <027FCB5D4C65CC4CA714042A4EE8CC6B04B2A954@nzprm231.apac.corp.eds.com> <48A092A6.6050000@mikefinn.com> <027FCB5D4C65CC4CA714042A4EE8CC6B04B2AA74@nzprm231.apac.corp.eds.com> Message-ID: <48A0FED3.9070706@mikefinn.com> Here is a bit more of the log before the failure. We have a process monitor that restarts the server if it becomes unresponsive, so the STW GC usually doesn't log out its completion, because the server has been restarted. The promotion failure/concurrent mode failure usually happens after running for several days, but it has happened after only running a few hours. 265131.419: [GC {Heap before GC invocations=57320: Heap par new generation total 294912K, used 285715K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, 0xfffffffbff000000) from space 32768K, 71% used [0xfffffffbff000000, 0xfffffffc00704cc8, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11590395K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265131.421: [ParNew Desired survivor size 16777216 bytes, new threshold 1 (max 5) - age 1: 23454432 bytes, 23454432 total - age 2: 8152032 bytes, 31606464 total : 285715K->31114K(294912K), 0.3236327 secs] 11876110K->11629279K(14647296K) Heap after GC invocations=57321: Heap par new generation total 294912K, used 31114K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 0% used [0xfffffffbef000000, 0xfffffffbef000000, 0xfffffffbff000000) from space 32768K, 94% used [0xfffffffc01000000, 0xfffffffc02e62938, 0xfffffffc03000000) to space 32768K, 0% used [0xfffffffbff000000, 0xfffffffbff000000, 0xfffffffc01000000) concurrent mark-sweep generation total 14352384K, used 11598165K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) } , 0.3262163 secs] 265134.554: [GC {Heap before GC invocations=57321: Heap par new generation total 294912K, used 293258K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, 0xfffffffbff000000) from space 32768K, 94% used [0xfffffffc01000000, 0xfffffffc02e62938, 0xfffffffc03000000) to space 32768K, 0% used [0xfffffffbff000000, 0xfffffffbff000000, 0xfffffffc01000000) concurrent mark-sweep 
generation total 14352384K, used 11598165K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265134.556: [ParNew Desired survivor size 16777216 bytes, new threshold 1 (max 5) - age 1: 17172968 bytes, 17172968 total : 293258K->16901K(294912K), 0.4933607 secs] 11891423K->11639137K(14647296K) Heap after GC invocations=57322: Heap par new generation total 294912K, used 16901K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 0% used [0xfffffffbef000000, 0xfffffffbef000000, 0xfffffffbff000000) from space 32768K, 51% used [0xfffffffbff000000, 0xfffffffc00081530, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11622236K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) } , 0.4961586 secs] 265137.224: [GC {Heap before GC invocations=57322: Heap par new generation total 294912K, used 279045K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, 0xfffffffbff000000) from space 32768K, 51% used [0xfffffffbff000000, 0xfffffffc00081530, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11622236K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265137.226: [ParNew Desired survivor size 16777216 bytes, new threshold 5 (max 5) - age 1: 11870696 bytes, 11870696 total : 279045K->11695K(294912K), 0.3247989 secs] 11901281K->11645944K(14647296K) Heap after GC invocations=57323: Heap par new generation total 294912K, used 11695K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 0% used [0xfffffffbef000000, 0xfffffffbef000000, 0xfffffffbff000000) from space 32768K, 35% used [0xfffffffc01000000, 0xfffffffc01b6bda8, 0xfffffffc03000000) to space 32768K, 0% used [0xfffffffbff000000, 0xfffffffbff000000, 0xfffffffc01000000) concurrent mark-sweep generation total 14352384K, used 11634249K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) } , 0.3275647 secs] 265139.683: [GC {Heap before GC invocations=57323: Heap par new generation total 294912K, used 273839K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, 0xfffffffbff000000) from space 32768K, 35% used [0xfffffffc01000000, 0xfffffffc01b6bda8, 0xfffffffc03000000) to space 32768K, 0% used [0xfffffffbff000000, 0xfffffffbff000000, 0xfffffffc01000000) concurrent mark-sweep generation total 14352384K, used 11634249K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265139.686: [ParNew Desired survivor size 16777216 bytes, new threshold 2 (max 5) - age 1: 15240960 bytes, 15240960 total - age 2: 8310352 bytes, 23551312 total : 273839K->23159K(294912K), 0.2782267 secs] 11908088K->11657408K(14647296K) Heap 
after GC invocations=57324: Heap par new generation total 294912K, used 23159K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 0% used [0xfffffffbef000000, 0xfffffffbef000000, 0xfffffffbff000000) from space 32768K, 70% used [0xfffffffbff000000, 0xfffffffc0069ddd8, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11634249K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) } , 0.2810588 secs] 265140.358: [CMS-concurrent-mark: 371.391/427.845 secs] 265140.358: [CMS-concurrent-preclean-start] 265142.278: [GC {Heap before GC invocations=57324: Heap par new generation total 294912K, used 284950K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 99% used [0xfffffffbef000000, 0xfffffffbfefa7bf0, 0xfffffffbff000000) from space 32768K, 70% used [0xfffffffbff000000, 0xfffffffc0069ddd8, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11634249K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265142.280: [ParNew Desired survivor size 16777216 bytes, new threshold 2 (max 5) - age 1: 13342736 bytes, 13342736 total - age 2: 10303256 bytes, 23645992 total : 284950K->23261K(294912K), 0.3157337 secs] 11919199K->11664877K(14647296K) Heap after GC invocations=57325: Heap par new generation total 294912K, used 23261K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 0% used [0xfffffffbef000000, 0xfffffffbef000000, 0xfffffffbff000000) from space 32768K, 70% used [0xfffffffc01000000, 0xfffffffc026b7648, 0xfffffffc03000000) to space 32768K, 0% used [0xfffffffbff000000, 0xfffffffbff000000, 0xfffffffc01000000) concurrent mark-sweep generation total 14352384K, used 11641615K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) } , 0.3180824 secs] 265145.240: [GC {Heap before GC invocations=57325: Heap par new generation total 294912K, used 285405K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, 0xfffffffbff000000) from space 32768K, 70% used [0xfffffffc01000000, 0xfffffffc026b7648, 0xfffffffc03000000) to space 32768K, 0% used [0xfffffffbff000000, 0xfffffffbff000000, 0xfffffffc01000000) concurrent mark-sweep generation total 14352384K, used 11641615K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265145.242: [ParNew Desired survivor size 16777216 bytes, new threshold 1 (max 5) - age 1: 21400416 bytes, 21400416 total - age 2: 8054280 bytes, 29454696 total : 285405K->28990K(294912K), 0.3531017 secs] 11927021K->11680648K(14647296K) Heap after GC invocations=57326: Heap par new generation total 294912K, used 28990K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 0% used [0xfffffffbef000000, 0xfffffffbef000000, 0xfffffffbff000000) from space 32768K, 88% used [0xfffffffbff000000, 
0xfffffffc00c4fa18, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11651657K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) } , 0.3555513 secs] 265147.633: [GC {Heap before GC invocations=57326: Heap par new generation total 294912K, used 291134K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, 0xfffffffbff000000) from space 32768K, 88% used [0xfffffffbff000000, 0xfffffffc00c4fa18, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11651657K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265147.635: [ParNew Desired survivor size 16777216 bytes, new threshold 5 (max 5) - age 1: 11786104 bytes, 11786104 total : 291134K->11607K(294912K), 0.4969958 secs] 11942792K->11688180K(14647296K) Heap after GC invocations=57327: Heap par new generation total 294912K, used 11607K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 0% used [0xfffffffbef000000, 0xfffffffbef000000, 0xfffffffbff000000) from space 32768K, 35% used [0xfffffffc01000000, 0xfffffffc01b55cf0, 0xfffffffc03000000) to space 32768K, 0% used [0xfffffffbff000000, 0xfffffffbff000000, 0xfffffffc01000000) concurrent mark-sweep generation total 14352384K, used 11676573K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) } , 0.4998969 secs] 265150.609: [GC {Heap before GC invocations=57327: Heap par new generation total 294912K, used 273751K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, 0xfffffffbff000000) from space 32768K, 35% used [0xfffffffc01000000, 0xfffffffc01b55cf0, 0xfffffffc03000000) to space 32768K, 0% used [0xfffffffbff000000, 0xfffffffbff000000, 0xfffffffc01000000) concurrent mark-sweep generation total 14352384K, used 11676573K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265150.611: [ParNew Desired survivor size 16777216 bytes, new threshold 2 (max 5) - age 1: 11435216 bytes, 11435216 total - age 2: 8139232 bytes, 19574448 total : 273751K->19272K(294912K), 0.2576767 secs] 11950324K->11695845K(14647296K) Heap after GC invocations=57328: Heap par new generation total 294912K, used 19272K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 0% used [0xfffffffbef000000, 0xfffffffbef000000, 0xfffffffbff000000) from space 32768K, 58% used [0xfffffffbff000000, 0xfffffffc002d21b0, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11676573K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) } , 0.2600295 secs] 265152.992: [GC {Heap before GC 
invocations=57328: Heap par new generation total 294912K, used 281416K [0xfffffffbef000000, 0xfffffffc03000000, 0xfffffffc03000000) eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, 0xfffffffbff000000) from space 32768K, 58% used [0xfffffffbff000000, 0xfffffffc002d21b0, 0xfffffffc01000000) to space 32768K, 0% used [0xfffffffc01000000, 0xfffffffc01000000, 0xfffffffc03000000) concurrent mark-sweep generation total 14352384K, used 11676573K [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) concurrent-mark-sweep perm gen total 98304K, used 67646K [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) 265152.994: [ParNew (promotion failed) Desired survivor size 16777216 bytes, new threshold 2 (max 5) - age 1: 15741536 bytes, 15741536 total - age 2: 7459336 bytes, 23200872 total : 281416K->281416K(294912K), 2.7625640 secs]265155.757: [CMS265176.917: [CMS-concurrent-preclean: 30.615/36.559 secs] (concurrent mode failure)[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor193] [Unloading class sun.reflect.GeneratedConstructorAccessor127] [Unloading class sun.reflect.GeneratedMethodAccessor54] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor166] [Unloading class sun.reflect.GeneratedMethodAccessor100] Jones, Doug H wrote: > That's odd ... could you send more of the GC details from around the > conc-mode-failure, say some ParNew's just prior to it, and the start of > the CMS GC, then through to the end of the CMS GC sequence. > > Thanks, > Doug. > > -----Original Message----- > From: Mike Finn [mailto:mike at mikefinn.com] > Sent: Tuesday, 12 August 2008 7:28 a.m. > To: hotspot-gc-use at openjdk.java.net > Cc: Jones, Doug H > Subject: Re: jdk 1.4.2_17 promotion failure (fragmentation?) > > Thanks for the reply. We had considered that, but, if I'm reading our > logs correctly, it seems that CMS is started pretty close to 70 every > time: ( here's a snippet of our CMS initial marks) > > 144975.040: [GC [1 CMS-initial-mark: 10047659K(14352384K)] > 10064265K(14647296K), 0.1464111 secs] > 147242.326: [GC [1 CMS-initial-mark: 10057190K(14352384K)] > 10081419K(14647296K), 0.1732362 secs] > 149781.334: [GC [1 CMS-initial-mark: 10050150K(14352384K)] > 10063772K(14647296K), 0.1149508 secs] > 155416.122: [GC [1 CMS-initial-mark: 10048398K(14352384K)] > 10065437K(14647296K), 0.1324914 secs] > 162762.754: [GC [1 CMS-initial-mark: 10047344K(14352384K)] > 10063850K(14647296K), 0.1310949 secs] > > > Jones, Doug H wrote: > >> Hi Mike, >> >> It is likely to be due to fragmentation of tenured (a CMS GC does not >> compact tenured). Under the 1.4.2 the 'New Generation Guarantee' >> requires that when a scavenge occurs there is contiguous space >> available in tenured equal to the size of the New area (5.0 relaxes >> that to just being enough space available, not necessarily >> > contiguous). > >> The fix is easy: you are half-way there with setting >> -XX:CMSInitiatingOccupancyFraction=70. However to tell the JVM to take >> > > >> notice of it you also need to add -XX:+UseCMSInitiatingOccupancyOnly, >> then CMS Collections will always kick in when tenured is approx 70% >> full. I would suspect that currently they don't until tenured is well >> above 90% full (1.4.2 is more optimistic I think than 5.0 about its >> ability to schedule a CMS GC 'JIT with a bit to spare', but >> conc-mode-failures can still be a problem under 5.0). >> >> Doug. 
>> >> >> >> >> >> > > > From Y.S.Ramakrishna at Sun.COM Tue Aug 12 18:08:07 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Tue, 12 Aug 2008 18:08:07 -0700 Subject: jdk 1.4.2_17 promotion failure (fragmentation?) In-Reply-To: <027FCB5D4C65CC4CA714042A4EE8CC6B04B2A954@nzprm231.apac.corp.eds.com> References: <48A08954.10504@mikefinn.com> <027FCB5D4C65CC4CA714042A4EE8CC6B04B2A954@nzprm231.apac.corp.eds.com> Message-ID: Hi Doug -- You are mostly right about the differnce regarding promotion failure and the "full promotion guarantee" between 1.4.2 and 5.0. However, recently some of the 5.0 code to relax the full promotion guarantee, kick off CMS collections "ergonomically" and dealing with unexpected promotion failure was backported to 1.4.2 (can;t recall which version). So the version 1.4.2_17 that Mike is running does behave somewhat like 5.0 in that respect. It also seems from the GC snippet that a scavenge was attempted but bailed midway (or at least so it would seem from the fact that the scavenge time reports at 2.76 seconds or so -- quite likely because of the bail-out and recovery phase of a failed scavenge). We'll have to look to see why the heap might have gotten so excessively fragmented, based on the fuller, more detailed logs that Mike has provided. -- ramki > It is likely to be due to fragmentation of tenured (a CMS GC does not > compact tenured). Under the 1.4.2 the 'New Generation Guarantee' > requires that when a scavenge occurs there is contiguous space available > in tenured equal to the size of the New area (5.0 relaxes that to just > being enough space available, not necessarily contiguous). > > The fix is easy: you are half-way there with setting > -XX:CMSInitiatingOccupancyFraction=70. However to tell the JVM to take > notice of it you also need to add -XX:+UseCMSInitiatingOccupancyOnly, > then CMS Collections will always kick in when tenured is approx 70% > full. I would suspect that currently they don't until tenured is well > above 90% full (1.4.2 is more optimistic I think than 5.0 about its > ability to schedule a CMS GC 'JIT with a bit to spare', but > conc-mode-failures can still be a problem under 5.0). > > Doug. > > > -----Original Message----- > From: hotspot-gc-use-bounces at openjdk.java.net > [mailto:hotspot-gc-use-bounces at openjdk.java.net] On Behalf Of Mike Finn > Sent: Tuesday, 12 August 2008 6:48 a.m. > To: hotspot-gc-use at openjdk.java.net > Subject: jdk 1.4.2_17 promotion failure (fragmentation?) > > We have a large, long running server application running in Weblogic > (using jdk 1.4.2_17) and using CMS. > > We originally were running in 32-bit mode and were getting promotion > failures and concurrent mode failures. When this happened, there was > usually a good amount of tenured space free, so we thought that the > problem was with fragmentation of the free space in the tenured > generation. > > To fix that, we've tried increasing the heap size. We have kept upping > the heap size (moving to 64-bit to do so) and now have a heap size of > 14G (see command line options below). The problem persisted and we > experimented with lowering the newsize to try to reduce the requirement > for contiguous space in tenured space, but we still promotion failures > (see log snippet at the end of this email). > > Is there anything else we can do in regards to tuning (lower the > CMSInitiatingOccupancyFraction?) ? Or are we going to have to move to > a > newer JDK? 
Or is it still possible that we have a memory leak or some > other abberrant program behavior? > > /j2sdk1.4.2_17/bin/java -server -XX:CMSInitiatingOccupancyFraction=70 > -XX:NewSize=320m -XX:MaxNewSize=320m -XX:SurvivorRatio=8 > -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:PermSize=96m > -XX:MaxPermSize=96m -XX:MaxTenuringThreshold=5 -Xms14336m -Xmx14336m > -Xss256k -XX:+HandlePromotionFailure -XX:+PrintTenuringDistribution -d64 > > > > 265150.609: [GC {Heap before GC invocations=57327: > Heap > par new generation total 294912K, used 273751K [0xfffffffbef000000, > > 0xfffffffc03000000, 0xfffffffc03000000) > eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, > 0xfffffffbff000000) > from space 32768K, 35% used [0xfffffffc01000000, 0xfffffffc01b55cf0, > 0xfffffffc03000000) > to space 32768K, 0% used [0xfffffffbff000000, > 0xfffffffbff000000, > 0xfffffffc01000000) > concurrent mark-sweep generation total 14352384K, used 11676573K > [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) > concurrent-mark-sweep perm gen total 98304K, used 67646K > [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) > 265150.611: [ParNew > Desired survivor size 16777216 bytes, new threshold 2 (max 5) > - age 1: 11435216 bytes, 11435216 total > - age 2: 8139232 bytes, 19574448 total > : 273751K->19272K(294912K), 0.2576767 secs] > 11950324K->11695845K(14647296K) Heap after GC invocations=57328: > Heap > par new generation total 294912K, used 19272K [0xfffffffbef000000, > > 0xfffffffc03000000, 0xfffffffc03000000) > eden space 262144K, 0% used [0xfffffffbef000000, 0xfffffffbef000000, > > 0xfffffffbff000000) > from space 32768K, 58% used [0xfffffffbff000000, 0xfffffffc002d21b0, > 0xfffffffc01000000) > to space 32768K, 0% used [0xfffffffc01000000, > 0xfffffffc01000000, > 0xfffffffc03000000) > concurrent mark-sweep generation total 14352384K, used 11676573K > [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) > concurrent-mark-sweep perm gen total 98304K, used 67646K > [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) } , > 0.2600295 secs] > 265152.992: [GC {Heap before GC invocations=57328: > Heap > par new generation total 294912K, used 281416K [0xfffffffbef000000, > > 0xfffffffc03000000, 0xfffffffc03000000) > eden space 262144K, 100% used [0xfffffffbef000000, 0xfffffffbff000000, > 0xfffffffbff000000) > from space 32768K, 58% used [0xfffffffbff000000, 0xfffffffc002d21b0, > 0xfffffffc01000000) > to space 32768K, 0% used [0xfffffffc01000000, > 0xfffffffc01000000, > 0xfffffffc03000000) > concurrent mark-sweep generation total 14352384K, used 11676573K > [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) > concurrent-mark-sweep perm gen total 98304K, used 67646K > [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) > 265152.994: [ParNew (promotion failed) > Desired survivor size 16777216 bytes, new threshold 2 (max 5) > - age 1: 15741536 bytes, 15741536 total > - age 2: 7459336 bytes, 23200872 total > : 281416K->281416K(294912K), 2.7625640 secs]265155.757: > [CMS265176.917: > [CMS-concurrent-preclean: 30.615/36.559 secs] (concurrent mode > failure)[Unloading class > sun.reflect.GeneratedSerializationConstructorAccessor193] > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > 
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From Keith.Holdaway at sas.com Thu Aug 7 14:17:58 2008 From: Keith.Holdaway at sas.com (Keith Holdaway) Date: Thu, 7 Aug 2008 17:17:58 -0400 Subject: G1 In-Reply-To: <4886336C.1060403@Sun.COM> References: <4886336C.1060403@Sun.COM> Message-ID: <304E9E55F6A4BE4B910C2437D4D1B4960B22E25483@MERCMBX14.na.sas.com> In which update of JDK 6.0 will G1 be included? Keith R Holdaway Java Development Technologies SAS The Power to Know Carpe Diem From Keith.Holdaway at sas.com Mon Aug 11 18:27:39 2008 From: Keith.Holdaway at sas.com (Keith Holdaway) Date: Mon, 11 Aug 2008 21:27:39 -0400 Subject: jdk 1.4.2_17 promotion failure (fragmentation?) In-Reply-To: References: <48A08954.10504@mikefinn.com>, Message-ID: <304E9E55F6A4BE4B910C2437D4D1B4960B2210653C@MERCMBX14.na.sas.com> I think I witnessed something very similar with our middle-tier apps in JBoss on 64 bit Windows try using the incremental CMS iCMS mode. This effectively resolved our situation. This is tantamount to reducing the CMS occupancy fraction in some ways. keith ________________________________________ From: hotspot-gc-dev-bounces at openjdk.java.net [hotspot-gc-dev-bounces at openjdk.java.net] On Behalf Of Y Srinivas Ramakrishna [Y.S.Ramakrishna at Sun.COM] Sent: Monday, August 11, 2008 7:02 PM To: Mike Finn Cc: hotspot-gc-use at openjdk.java.net Subject: Re: jdk 1.4.2_17 promotion failure (fragmentation?) It's almost certainly fragmentation, although the extent of fragmentation appears quite excessive. You have roughly 2.9 GB of free space in the old generation at the point at which the promotion failed. The scavenge preceding the one in which the promotion failure occurred promoted almost nothing to the old generation, so the expectation is that the failing scavenge would also try to promote a pretty small amount of data. Thus, it would seem that the 2.9 GB of free space in the old generation must be excessively fragmented not to be able to absorb even that small amount of promotion. Frankly, I am surprised at this behaviour. Is there something inherent in your application that causes the sizes and lifetimes of your longer-lived objects to vary a lot over time? Can you share with us a longer GC log and perhaps some data as to the frequency of such promotion failure events? I'll cintact you off-line for the GC logs. -- ramki ----- Original Message ----- From: Mike Finn Date: Monday, August 11, 2008 11:48 am Subject: jdk 1.4.2_17 promotion failure (fragmentation?) To: hotspot-gc-use at openjdk.java.net > We have a large, long running server application running in Weblogic > (using jdk 1.4.2_17) and using CMS. > > We originally were running in 32-bit mode and were getting promotion > failures and concurrent mode failures. When this happened, there was > usually a good amount of tenured space free, so we thought that the > problem was with fragmentation of the free space in the tenured generation. > > To fix that, we've tried increasing the heap size. We have kept upping > > the heap size (moving to 64-bit to do so) and now have a heap size of > > 14G (see command line options below). The problem persisted and we > experimented with lowering the newsize to try to reduce the > requirement > for contiguous space in tenured space, but we still promotion failures > > (see log snippet at the end of this email). > > Is there anything else we can do in regards to tuning (lower the > CMSInitiatingOccupancyFraction?) ? Or are we going to have to move to > a > newer JDK? 
Or is it still possible that we have a memory leak or some > > other abberrant program behavior? > > /j2sdk1.4.2_17/bin/java -server -XX:CMSInitiatingOccupancyFraction=70 > > -XX:NewSize=320m -XX:MaxNewSize=320m -XX:SurvivorRatio=8 > -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:PermSize=96m > -XX:MaxPermSize=96m -XX:MaxTenuringThreshold=5 -Xms14336m -Xmx14336m > -Xss256k -XX:+HandlePromotionFailure -XX:+PrintTenuringDistribution -d64 > > > > 265150.609: [GC {Heap before GC invocations=57327: > Heap > par new generation total 294912K, used 273751K [0xfffffffbef000000, > > 0xfffffffc03000000, 0xfffffffc03000000) > eden space 262144K, 100% used [0xfffffffbef000000, > 0xfffffffbff000000, > 0xfffffffbff000000) > from space 32768K, 35% used [0xfffffffc01000000, > 0xfffffffc01b55cf0, > 0xfffffffc03000000) > to space 32768K, 0% used [0xfffffffbff000000, > 0xfffffffbff000000, > 0xfffffffc01000000) > concurrent mark-sweep generation total 14352384K, used 11676573K > [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) > concurrent-mark-sweep perm gen total 98304K, used 67646K > [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) > 265150.611: [ParNew > Desired survivor size 16777216 bytes, new threshold 2 (max 5) > - age 1: 11435216 bytes, 11435216 total > - age 2: 8139232 bytes, 19574448 total > : 273751K->19272K(294912K), 0.2576767 secs] > 11950324K->11695845K(14647296K) Heap after GC invocations=57328: > Heap > par new generation total 294912K, used 19272K [0xfffffffbef000000, > > 0xfffffffc03000000, 0xfffffffc03000000) > eden space 262144K, 0% used [0xfffffffbef000000, > 0xfffffffbef000000, > 0xfffffffbff000000) > from space 32768K, 58% used [0xfffffffbff000000, > 0xfffffffc002d21b0, > 0xfffffffc01000000) > to space 32768K, 0% used [0xfffffffc01000000, > 0xfffffffc01000000, > 0xfffffffc03000000) > concurrent mark-sweep generation total 14352384K, used 11676573K > [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) > concurrent-mark-sweep perm gen total 98304K, used 67646K > [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) > } , 0.2600295 secs] > 265152.992: [GC {Heap before GC invocations=57328: > Heap > par new generation total 294912K, used 281416K [0xfffffffbef000000, > > 0xfffffffc03000000, 0xfffffffc03000000) > eden space 262144K, 100% used [0xfffffffbef000000, > 0xfffffffbff000000, > 0xfffffffbff000000) > from space 32768K, 58% used [0xfffffffbff000000, > 0xfffffffc002d21b0, > 0xfffffffc01000000) > to space 32768K, 0% used [0xfffffffc01000000, > 0xfffffffc01000000, > 0xfffffffc03000000) > concurrent mark-sweep generation total 14352384K, used 11676573K > [0xfffffffc03000000, 0xffffffff6f000000, 0xffffffff6f000000) > concurrent-mark-sweep perm gen total 98304K, used 67646K > [0xffffffff6f000000, 0xffffffff75000000, 0xffffffff75000000) > 265152.994: [ParNew (promotion failed) > Desired survivor size 16777216 bytes, new threshold 2 (max 5) > - age 1: 15741536 bytes, 15741536 total > - age 2: 7459336 bytes, 23200872 total > : 281416K->281416K(294912K), 2.7625640 secs]265155.757: > [CMS265176.917: > [CMS-concurrent-preclean: 30.615/36.559 secs] > (concurrent mode failure)[Unloading class > sun.reflect.GeneratedSerializationConstructorAccessor193] > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use _______________________________________________ hotspot-gc-use mailing list hotspot-gc-use at openjdk.java.net 
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From Jon.Masamitsu at Sun.COM Wed Aug 13 11:38:49 2008 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Wed, 13 Aug 2008 11:38:49 -0700 Subject: G1 In-Reply-To: <304E9E55F6A4BE4B910C2437D4D1B4960B22E25483@MERCMBX14.na.sas.com> References: <4886336C.1060403@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B4960B22E25483@MERCMBX14.na.sas.com> Message-ID: <48A32A39.305@Sun.COM> Keith, The hotspot binaries for jdk6 update 12 will include G1. There is nothing official about G1's inclusion in jdk6u12 but that's how it's looking right now. In jdk6u12 G1 will be an experimental collector. I don't think a decision has been made regarding when it will be a supported collector in a jdk6 update. That likely depends on feedback we get from users who try it. Jon On 08/07/08 14:17, Keith Holdaway wrote: > In which update of JDK 6.0 will G1 be included? > > Keith R Holdaway > Java Development Technologies > > SAS The Power to Know > > Carpe Diem > From tony.printezis at sun.com Wed Aug 13 12:07:31 2008 From: tony.printezis at sun.com (Tony Printezis) Date: Wed, 13 Aug 2008 15:07:31 -0400 Subject: Who hates the *Ratio parameters? Message-ID: <48A330F3.4070509@sun.com> Hi all, I personally don't like the *Ratio parameters (e.g., -XX:SurvivorRatio=) as I don't think they are very intuitive to set. I've heard the same from a few customers too. Would most people prefer parameters based on percentages (e.g., -XX:SurvivorPerc=, where 0 <= <= 100) instead? Tony -- ---------------------------------------------------------------------- | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | | | MS BUR02-311 | | e-mail: tony.printezis at sun.com | 35 Network Drive | | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | ---------------------------------------------------------------------- e-mail client: Thunderbird (Solaris) From michael.finocchiaro at gmail.com Wed Aug 13 12:26:09 2008 From: michael.finocchiaro at gmail.com (Michael Finocchiaro) Date: Wed, 13 Aug 2008 21:26:09 +0200 Subject: Who hates the *Ratio parameters? In-Reply-To: <48A330F3.4070509@sun.com> References: <48A330F3.4070509@sun.com> Message-ID: <3487B6DA-5CA9-443D-AC74-118ED3B1A81D@gmail.com> I don't mind the ratio but there is some inconsistency perhaps since some things are ratios and some are percent (the defunct - XX:MaxLiveObjectEvacuationRatio for example). I kinda got used to the ratio but could understand that it is not easy to calculate. Perhaps rather than being the ratio of To to Eden, it could be given as the To +From as a portion of New or else as you said, a percentage. Problem though is that CMS automagically sets SurvivorRatio=1024 and that particular number would be hard to program into a 2 digit percentage... My 2 cents, Fino On Aug 13, 2008, at 9:07 PM, Tony Printezis wrote: > Hi all, > > I personally don't like the *Ratio parameters (e.g., > -XX:SurvivorRatio=) as I don't think they are very intuitive to > set. > I've heard the same from a few customers too. Would most people prefer > parameters based on percentages (e.g., -XX:SurvivorPerc=, where 0 > <= > <= 100) instead? > > Tony > > -- > ---------------------------------------------------------------------- > | Tony Printezis, Staff Engineer | Sun Microsystems Inc. 
| > | | MS BUR02-311 | > | e-mail: tony.printezis at sun.com | 35 Network Drive | > | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | > ---------------------------------------------------------------------- > e-mail client: Thunderbird (Solaris) > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From tony.printezis at sun.com Wed Aug 13 12:29:32 2008 From: tony.printezis at sun.com (Tony Printezis) Date: Wed, 13 Aug 2008 15:29:32 -0400 Subject: Who hates the *Ratio parameters? In-Reply-To: <48A33279.4050405@sun.com> References: <48A330F3.4070509@sun.com> <48A33279.4050405@sun.com> Message-ID: <48A3361C.3070105@sun.com> Paul, Paul Hohensee wrote: > I would. I'd use "SurvivorPercent" though, not "SurvivorPerc". The > latter > sounds like a bonus you get for surviving. :) I was trying to be a be concise, but sure Percent is fine. > Or maybe just add "SurvivorSize" and "MaxSurvivorSize", like we have > "NewSize" and "MaxNewSize". That'd be exact, unlike either a percent > or a ratio. Well, the problem with using specific sizes is that, if you resize your young gen (in this case), the survivor size will also have to change. Maybe, using percentages will handle that case a bit better (even though I can think of some cases when you want to fix the survivor size but maybe vary the eden size). > btw, we seem to use the term "Ratio" to mean both a genuine ratio _and_ a > percent. E.g., "SurvivorRatio" is a genuine ratio, but > "MinHeapFreeRatio" > is a percent. Yep. Unfortunately, we cannot change the meaning of MinHeapFreeRatio (to keep backwards compatibility). So, maybe, it'd be best to introduce a new parameter, say MinHeapFreePercent, with the same semantics as MinHeapFreeRatio, as a way to start migrating users to the correct one. Tony > Tony Printezis wrote: >> Hi all, >> >> I personally don't like the *Ratio parameters (e.g., >> -XX:SurvivorRatio=) as I don't think they are very intuitive to >> set. I've heard the same from a few customers too. Would most people >> prefer parameters based on percentages (e.g., -XX:SurvivorPerc=, >> where 0 <= <= 100) instead? >> >> Tony >> >> -- ---------------------------------------------------------------------- | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | | | MS BUR02-311 | | e-mail: tony.printezis at sun.com | 35 Network Drive | | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | ---------------------------------------------------------------------- e-mail client: Thunderbird (Solaris) From tony.printezis at sun.com Wed Aug 13 13:35:22 2008 From: tony.printezis at sun.com (Tony Printezis) Date: Wed, 13 Aug 2008 16:35:22 -0400 Subject: Who hates the *Ratio parameters? In-Reply-To: <48A344EE.3010901@sun.com> References: <48A330F3.4070509@sun.com> <48A33279.4050405@sun.com> <48A3361C.3070105@sun.com> <48A344EE.3010901@sun.com> Message-ID: <48A3458A.3010406@sun.com> John, The GC erconomics do largely do that. However, a lot of users still do manualy tuning to get the last oz of performance they can get... Tony John Pampuch wrote: > Tony- > > In the grand scheme of things though, it would be better if we could > figure these out automatically and not need the parameter at all. > > -John > > Tony Printezis wrote: >> Paul, >> >> Paul Hohensee wrote: >> >>> I would. I'd use "SurvivorPercent" though, not "SurvivorPerc". The >>> latter >>> sounds like a bonus you get for surviving. 
:) >>> >> I was trying to be a be concise, but sure Percent is fine. >> >>> Or maybe just add "SurvivorSize" and "MaxSurvivorSize", like we have >>> "NewSize" and "MaxNewSize". That'd be exact, unlike either a percent >>> or a ratio. >>> >> Well, the problem with using specific sizes is that, if you resize your >> young gen (in this case), the survivor size will also have to change. >> Maybe, using percentages will handle that case a bit better (even though >> I can think of some cases when you want to fix the survivor size but >> maybe vary the eden size). >> >>> btw, we seem to use the term "Ratio" to mean both a genuine ratio _and_ a >>> percent. E.g., "SurvivorRatio" is a genuine ratio, but >>> "MinHeapFreeRatio" >>> is a percent. >>> >> Yep. Unfortunately, we cannot change the meaning of MinHeapFreeRatio (to >> keep backwards compatibility). So, maybe, it'd be best to introduce a >> new parameter, say MinHeapFreePercent, with the same semantics as >> MinHeapFreeRatio, as a way to start migrating users to the correct one. >> >> Tony >> >>> Tony Printezis wrote: >>> >>>> Hi all, >>>> >>>> I personally don't like the *Ratio parameters (e.g., >>>> -XX:SurvivorRatio=) as I don't think they are very intuitive to >>>> set. I've heard the same from a few customers too. Would most people >>>> prefer parameters based on percentages (e.g., -XX:SurvivorPerc=, >>>> where 0 <= <= 100) instead? >>>> >>>> Tony >>>> >>>> >>>> >> >> -- --------------------------------------------------------------------- | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | | | MS UBUR02-311 | | e-mail: tony.printezis at sun.com | 35 Network Drive | | office: +1 781 442 0998 (x20998) | Burlington, MA 01803-2756, USA | --------------------------------------------------------------------- e-mail client: Thunderbird (Linux) From Paul.Hohensee at Sun.COM Wed Aug 13 12:14:01 2008 From: Paul.Hohensee at Sun.COM (Paul Hohensee) Date: Wed, 13 Aug 2008 15:14:01 -0400 Subject: Who hates the *Ratio parameters? In-Reply-To: <48A330F3.4070509@sun.com> References: <48A330F3.4070509@sun.com> Message-ID: <48A33279.4050405@sun.com> I would. I'd use "SurvivorPercent" though, not "SurvivorPerc". The latter sounds like a bonus you get for surviving. :) Or maybe just add "SurvivorSize" and "MaxSurvivorSize", like we have "NewSize" and "MaxNewSize". That'd be exact, unlike either a percent or a ratio. btw, we seem to use the term "Ratio" to mean both a genuine ratio _and_ a percent. E.g., "SurvivorRatio" is a genuine ratio, but "MinHeapFreeRatio" is a percent. Paul Tony Printezis wrote: > Hi all, > > I personally don't like the *Ratio parameters (e.g., > -XX:SurvivorRatio=) as I don't think they are very intuitive to set. > I've heard the same from a few customers too. Would most people prefer > parameters based on percentages (e.g., -XX:SurvivorPerc=, where 0 <= > <= 100) instead? > > Tony > > From john.pampuch at sun.com Wed Aug 13 13:32:46 2008 From: john.pampuch at sun.com (John Pampuch) Date: Wed, 13 Aug 2008 13:32:46 -0700 Subject: Who hates the *Ratio parameters? In-Reply-To: <48A3361C.3070105@sun.com> References: <48A330F3.4070509@sun.com> <48A33279.4050405@sun.com> <48A3361C.3070105@sun.com> Message-ID: <48A344EE.3010901@sun.com> An HTML attachment was scrubbed... 
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080813/28a80d99/attachment.html From Y.S.Ramakrishna at Sun.COM Wed Aug 13 14:50:20 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Wed, 13 Aug 2008 14:50:20 -0700 Subject: Who hates the *Ratio parameters? In-Reply-To: <48A344EE.3010901@sun.com> References: <48A330F3.4070509@sun.com> <48A33279.4050405@sun.com> <48A3361C.3070105@sun.com> <48A344EE.3010901@sun.com> Message-ID: Hi Tony -- > I personally don't like the *Ratio parameters (e.g., > -XX:SurvivorRatio=) as I don't think they are very intuitive to > set. I've heard the same from a few customers too. Would most people > prefer parameters based on percentages (e.g., -XX:SurvivorPerc=, > where 0 <= <= 100) instead? One of the (other) disadvantages (lacking floating point options, do we have them now?) of *Ratio specs was that the the smallest ratio you could specify was 1:1. What about allowing percent specs to exceed 100 in appropriate cases where it might make sense? kevlar suit donned :-) -- ramki From tony.printezis at sun.com Thu Aug 14 07:40:02 2008 From: tony.printezis at sun.com (Tony Printezis) Date: Thu, 14 Aug 2008 10:40:02 -0400 Subject: Who hates the *Ratio parameters? In-Reply-To: References: <48A330F3.4070509@sun.com> <48A33279.4050405@sun.com> <48A3361C.3070105@sun.com> <48A344EE.3010901@sun.com> Message-ID: <48A443C2.3080205@sun.com> Ramki, See below. Y Srinivas Ramakrishna wrote: > Hi Tony -- > > >> I personally don't like the *Ratio parameters (e.g., >> -XX:SurvivorRatio=) as I don't think they are very intuitive to >> set. I've heard the same from a few customers too. Would most people >> prefer parameters based on percentages (e.g., -XX:SurvivorPerc=, >> where 0 <= <= 100) instead? >> > > One of the (other) disadvantages (lacking floating point options, do > we have them now?) Actually, we do have floating parameters now (I learned something new today!). I just checked and also did a quick test to make sure it works; it seems to. You can now set something like: product(double, SurvivorPercentage, 13.5 \ "Percentage of survivor space vs. young generation size") \ So, the argument that percentages cannot be small enough, if they are only up to 1%, goes out of the window. > of *Ratio specs was that the the smallest ratio you > could specify was 1:1. What about allowing percent specs to exceed 100 > in appropriate cases where it might make sense? > Like, for example, application throughput goal to be 150%, right? :-) > kevlar suit donned :-) > :-) Tony > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > -- ---------------------------------------------------------------------- | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | | | MS BUR02-311 | | e-mail: tony.printezis at sun.com | 35 Network Drive | | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | ---------------------------------------------------------------------- e-mail client: Thunderbird (Solaris) From Y.S.Ramakrishna at Sun.COM Thu Aug 14 09:53:01 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Thu, 14 Aug 2008 09:53:01 -0700 Subject: Who hates the *Ratio parameters? 
In-Reply-To: <48A443C2.3080205@sun.com> References: <48A330F3.4070509@sun.com> <48A33279.4050405@sun.com> <48A3361C.3070105@sun.com> <48A344EE.3010901@sun.com> <48A443C2.3080205@sun.com> Message-ID: > > of *Ratio specs was that the the smallest ratio you > > could specify was 1:1. What about allowing percent specs to exceed 100 > > in appropriate cases where it might make sense? > > > Like, for example, application throughput goal to be 150%, right? :-) That, of course, which would be hugely popular with most of our Java users, :-) but, more specifically, would having, for example, a survivor size that is greater than Eden size (for example, via SurvivorPercentage=150) make sense under some scenario of performance objectives/constraints? -- ramki From tony.printezis at sun.com Thu Aug 14 09:58:38 2008 From: tony.printezis at sun.com (Tony Printezis) Date: Thu, 14 Aug 2008 12:58:38 -0400 Subject: Who hates the *Ratio parameters? In-Reply-To: References: <48A330F3.4070509@sun.com> <48A33279.4050405@sun.com> <48A3361C.3070105@sun.com> <48A344EE.3010901@sun.com> <48A443C2.3080205@sun.com> Message-ID: <48A4643E.8080903@sun.com> Ramki, See below. Y Srinivas Ramakrishna wrote: > >>> of *Ratio specs was that the the smallest ratio you >>> could specify was 1:1. What about allowing percent specs to exceed 100 >>> in appropriate cases where it might make sense? >>> >>> >> Like, for example, application throughput goal to be 150%, right? :-) >> > > That, of course, which would be hugely popular with most of our Java users, :-) > :-) > but, more specifically, would having, for example, a survivor size > that is greater than Eden size (for example, via SurvivorPercentage=150) > make sense under some scenario of performance objectives/constraints? > Hmmm.... interesting. I had somehow assumed that the survivor percentage would be with respect to the young gen size, not with respect to the eden. I'm not sure which one is best. But, yes, you're right; we should allow for the possibility of the survivors being larger than the eden. Additionally, someone, in a private communication (you know who you are!), also recommended that we actually provide EdenPercent instead of SurvivorPercent. Again, I'm not sure what the pros / cons are. Tony -- ---------------------------------------------------------------------- | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | | | MS BUR02-311 | | e-mail: tony.printezis at sun.com | 35 Network Drive | | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | ---------------------------------------------------------------------- e-mail client: Thunderbird (Solaris) From Y.S.Ramakrishna at Sun.COM Thu Aug 14 10:04:27 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Thu, 14 Aug 2008 10:04:27 -0700 Subject: Who hates the *Ratio parameters? In-Reply-To: <48A4643E.8080903@sun.com> References: <48A330F3.4070509@sun.com> <48A33279.4050405@sun.com> <48A3361C.3070105@sun.com> <48A344EE.3010901@sun.com> <48A443C2.3080205@sun.com> <48A4643E.8080903@sun.com> Message-ID: > > but, more specifically, would having, for example, a survivor size > > that is greater than Eden size (for example, via SurvivorPercentage=150) > > make sense under some scenario of performance objectives/constraints? > > > Hmmm.... interesting. I had somehow assumed that the survivor > percentage > would be with respect to the young gen size, not with respect to the > eden. I'm not sure which one is best. 
But, yes, you're right; we > should > allow for the possibility of the survivors being larger than the eden. Yes, I agree that SurvivorPercent = SurvivorSize/YoungGenSize makes more sense. Sorry for the confusion. -- ramki > > Additionally, someone, in a private communication (you know who you > are!), also recommended that we actually provide EdenPercent instead > of > SurvivorPercent. Again, I'm not sure what the pros / cons are. > > Tony > > -- > ---------------------------------------------------------------------- > | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | > | | MS BUR02-311 | > | e-mail: tony.printezis at sun.com | 35 Network Drive | > | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | > ---------------------------------------------------------------------- > e-mail client: Thunderbird (Solaris) > > From Keith.Holdaway at sas.com Mon Aug 18 05:45:24 2008 From: Keith.Holdaway at sas.com (Keith Holdaway) Date: Mon, 18 Aug 2008 08:45:24 -0400 Subject: JDK 5.0 Native Heap Issues? Message-ID: <304E9E55F6A4BE4B910C2437D4D1B4960B22106559@MERCMBX14.na.sas.com> Hi, We have been running an endurance test suite with LoadRunner against JDK 5.0 u14 with the following VM arguments: set JAVA_OPTS=%JAVA_OPTS% -Xms1000m -Xmx1000m -XX:PermSize=87m -XX:MaxPermSize=87m -Xss96k -XX:-UseTLAB -XX:+UseConcMarkSweepGC -XX:+DisableExplicitGC -XX:NewSize=128m -XX:MaxNewSize=128m -Dcom.sun.management.jmxremote -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djava.awt.headless=true -Dsas.svcs.http.max.connections=50 And we keep seeing the following error message after 15 hours: Exception java.lang.OutOfMemoryError: requested 655360 bytes for GrET* in C:/BUILD_AREA/jdk1.5.0_15/hotspot\src\share\vm\utilities\growableArray.cpp. Out of swap space? I suggested that this error is the result of native heap issues - fragmentation perhaps, and so reducing the -Xmx and -Xss and MaxPermGen would enable more native heap. This is a 32 bit Windows box, and the /3GB switch is turned on. The tester has added the following two VM args to enable an improvement in CMS usage, since it seems our application allocates at such a rate that CMS is overrun and Full GCs interrupt the CMS algorithm: -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=40 But he also changed the JDK to 6.0 u7, and now the endurance test has run for 25 hrs? We are not sure if the success is attributable to JDK6 or to -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=40 Any ideas? Also, is the -XX:+UseCMSCompactAtFullCollection a default behaviour for JDK 5.0? thanks keith _ From Jon.Masamitsu at Sun.COM Mon Aug 18 06:55:21 2008 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Mon, 18 Aug 2008 06:55:21 -0700 Subject: JDK 5.0 Native Heap Issues? In-Reply-To: <304E9E55F6A4BE4B910C2437D4D1B4960B22106559@MERCMBX14.na.sas.com> References: <304E9E55F6A4BE4B910C2437D4D1B4960B22106559@MERCMBX14.na.sas.com> Message-ID: <48A97F49.7060609@sun.com> Keith, One of the changes made in jdk6 was that the "always promote" policy of CMS (i.e., very small survivor spaces and maximum tenuring threshold) was changed so as to better utilize the young gen to filter out short-lived data. The change is described in http://java.sun.com/javase/6/webnotes/adoption/adoptionguide.html Search for the new flag CMSUseOldDefaults.
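(Roughly speaking -- the precise default values are spelled out in the adoption guide and may differ between updates, so treat the numbers here as illustrative rather than authoritative -- the old policy behaved much as if you had explicitly run with something like

java -XX:+UseConcMarkSweepGC -XX:SurvivorRatio=1024 ...

i.e., survivor spaces so small that essentially everything surviving a scavenge was promoted immediately, whereas the jdk6 defaults size the survivors conventionally so that short-lived data has a chance to die in the young gen. Running with -XX:+CMSUseOldDefaults should give you back the old behaviour if you want a side-by-side comparison.)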
I've seen this change help in terms of slowing the CMS generation growth but your suggestion of a lower initiating occupancy would do something similar in a different way. My guess though is that the jdk6 changes are helping more. UseCMSCompactAtFullCollection is on by default in jdk5. Jon Keith Holdaway wrote On 08/18/08 05:45,: >Hi, > >We have been running an endurance test suite with LoadRunner against JDK 5.0 u14 with the following VM arguments: > >et JAVA_OPTS=%JAVA_OPTS% -Xms1000m -Xmx1000m -XX:PermSize=87m -XX:MaxPermSize=87m -Xss96k -XX:-UseTLAB -XX:+UseConcMarkSweepGC -XX:+DisableExplicitGC -XX:NewSize=128m -XX:MaxNewSize=128m -Dcom.sun.management.jmxremote -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djava.awt.headless=true -Dsas.svcs.http.max.connections=50 > >And we keep seeing the following error message after 15 hours: > >Exception java.lang.OutOfMemoryError: requested 655360 bytes for GrET* in C:/BUILD_AREA/jdk1.5.0_15/hotspot\src\share\vm\utilities\growableArray.cpp. Out of swap space? > >I suggested that this error is the result of native heap issues - fragmentation perhaps, and so reducing the -Xmx and -Xss and MaxPermGen would enable more native heap. This is a 32 bit Windows box, and the /3GB switch is turned on. > >The tester has added the following two VM args to enable an improvement in CMS usage, since it seems our application allocates at such a rate that CMS is overrun and Full GCs interrupt the CMS algorithm: > >-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=40 > >But he also changed the JDK to 6.0 u7, and now the endurance test has run for 25 hrs? > >We are not sure if the success is contributed to JDK6 or to -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=40 > >Any ideas? > >Also, is the -XX:+UseCMSCompactAtFullCollection a default behaviour for JDK 5.0? > >thanks > >keith >_ >_______________________________________________ >hotspot-gc-use mailing list >hotspot-gc-use at openjdk.java.net >http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > From neeraj0jain at gmail.com Mon Aug 18 07:55:22 2008 From: neeraj0jain at gmail.com (Neeraj Jain) Date: Mon, 18 Aug 2008 20:25:22 +0530 Subject: 1.4.2 PrintFLSStatistics Output Message-ID: We are using Java 1.4.2_17 (64-bit mode) to run long running server application on solaris using following java options -server -d64 -XX:+HandlePromotionFailure -XX:CMSInitiatingOccupancyFraction=70 -XX:NewSize=256m -XX:MaxNewSize=2 56m -XX:SurvivorRatio=10 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+PrintTenuringDistribution -XX:PrintCMSStatistics=1 -XX:+PrintGCDetails -XX:+Pri ntGCTimeStamps -XX:PrintFLSStatistics=1 -XX:+PrintHeapAtGC -XX:+PrintClassHistogram -XX:PermSize=96m -XX:MaxPermSize=96m -XX:MaxTenuringThreshold=5 -X ms6g -Xmx6g -Xss256k In order to isolate a fragmentation issue, we started using -XX:PrintFLSStatistics option to see if the fragmentation is really taking place. However, the output looks quite confusing. I have following specific queries around this (please refer to the GC logs given below): 1. As per GC stats, around 5 GB of memory is taken by the application so there should be ~0.75 GB free in tenured generation but BinaryTreeDictionary statistics show only around 38 MB of memory in Free List Trees? Where have the rest of 700 MB gone? 2. Going by these stats, the total free space is only 349438 words (~2.8 MB) but still minor collections succeed (before eventually getting promotion failed error). 
Where does the free space to provide Young Generation Guarantee (~240 MB) come from? 3. Why there are two sets of BinaryTreeDictionary stats? Any help will be greatly appreciated. Thanks in advance. Neeraj =======================================JVM GC Logs==================================================== 40330.741: [ParNewBefore GC: Before GC: Desired survivor size 11173888 bytes, new threshold 2 (max 5) - age 1: 7549224 bytes, 7549224 total - age 2: 5868416 bytes, 13417640 total : 231488K->13203K(240320K), 0.4537259 secs] 5235545K->5023066K(6269632K)Statistics for BinaryTreeDictionary: ------------------------------------ Total Free Space: 349438 Max Chunk Size: 349438 Number of Blocks: 1 Av. Block Size: 349438 Tree Height: 1 Statistics for BinaryTreeDictionary: ------------------------------------ Total Free Space: 4440679 Max Chunk Size: 4440679 Number of Blocks: 1 Av. Block Size: 4440679 Tree Height: 1 Heap after GC invocations=2396: Heap par new generation total 240320K, used 13203K [0xfffffffdefc00000, 0xfffffffdffc00000, 0xfffffffdffc00000) eden space 218496K, 0% used [0xfffffffdefc00000, 0xfffffffdefc00000, 0xfffffffdfd160000) from space 21824K, 60% used [0xfffffffdfd160000, 0xfffffffdfde44d50, 0xfffffffdfe6b0000) to space 21824K, 0% used [0xfffffffdfe6b0000, 0xfffffffdfe6b0000, 0xfffffffdffc00000) concurrent mark-sweep generation total 6029312K, used 5009862K [0xfffffffdffc00000, 0xffffffff6fc00000, 0xffffffff6fc00000) concurrent-mark-sweep perm gen total 98304K, used 63210K [0xffffffff6fc00000, 0xffffffff75c00000, 0xffffffff75c00000) } , 0.4557118 secs] After GC: After GC: 40344.958: [GC {Heap before GC invocations=2396: Heap par new generation total 240320K, used 231699K [0xfffffffdefc00000, 0xfffffffdffc00000, 0xfffffffdffc00000) eden space 218496K, 100% used [0xfffffffdefc00000, 0xfffffffdfd160000, 0xfffffffdfd160000) from space 21824K, 60% used [0xfffffffdfd160000, 0xfffffffdfde44d50, 0xfffffffdfe6b0000) to space 21824K, 0% used [0xfffffffdfe6b0000, 0xfffffffdfe6b0000, 0xfffffffdffc00000) concurrent mark-sweep generation total 6029312K, used 5008129K [0xfffffffdffc00000, 0xffffffff6fc00000, 0xffffffff6fc00000) concurrent-mark-sweep perm gen total 98304K, used 63210K [0xffffffff6fc00000, 0xffffffff75c00000, 0xffffffff75c00000) Statistics for BinaryTreeDictionary: ------------------------------------ Total Free Space: 353702 Max Chunk Size: 349438 Number of Blocks: 9 Av. Block Size: 39300 Tree Height: 5 Statistics for BinaryTreeDictionary: ------------------------------------ Total Free Space: 4440679 Max Chunk Size: 4440679 Number of Blocks: 1 Av. Block Size: 4440679 Tree Height: 1 40344.960: [ParNewBefore GC: Before GC: (promotion failed) Desired survivor size 11173888 bytes, new threshold 2 (max 5) - age 1: 7604792 bytes, 7604792 total - age 2: 5978704 bytes, 13583496 total : 231699K->231699K(240320K), 1.2985705 secs]40346.259: [CMS40358.345: [CMS-concurrent-sweep: 25.860/27.699 secs] (CMS-concurrent-sweep yielded 2 times) (concurrent mode failure)[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor93] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor30] [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor55] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080818/1fba1d8c/attachment.html From Y.S.Ramakrishna at Sun.COM Mon Aug 18 09:11:37 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Mon, 18 Aug 2008 09:11:37 -0700 Subject: 1.4.2 PrintFLSStatistics Output In-Reply-To: References: Message-ID: Hi Neeraj -- > We are using Java 1.4.2_17 (64-bit mode) to run long running server > application on solaris using following java options > > -server -d64 -XX:+HandlePromotionFailure > -XX:CMSInitiatingOccupancyFraction=70 -XX:NewSize=256m -XX:MaxNewSize=2 > 56m -XX:SurvivorRatio=10 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC > -XX:+PrintTenuringDistribution -XX:PrintCMSStatistics=1 -XX:+PrintGCDetails > -XX:+Pri > ntGCTimeStamps -XX:PrintFLSStatistics=1 -XX:+PrintHeapAtGC > -XX:+PrintClassHistogram -XX:PermSize=96m -XX:MaxPermSize=96m > -XX:MaxTenuringThreshold=5 -X > ms6g -Xmx6g -Xss256k > > In order to isolate a fragmentation issue, we started using > -XX:PrintFLSStatistics option to see if the fragmentation is really taking > place. However, the output looks quite confusing. I have following specific > queries around this (please refer to the GC logs given below): > > 1. As per GC stats, around 5 GB of memory is taken by the application > so > there should be ~0.75 GB free in tenured generation but BinaryTreeDictionary > statistics show only around 38 MB of memory in Free List Trees? Where > have > the rest of 700 MB gone? The free space in the old generation is kept mainly in two kinds of free lists. Free blocks larger than some threshold (i think 1 KB, but am not sure without checking the code) are kept in a binary tree indexed by size and printed as part of the "BinaryTreeDictionary". Free blocks smaller than that threshold are kept in an array of free lists indexed by size. Since most objects created by Java programs are usually smaller than the 1 KB threshold, you will usually find that most of the dynamic footprint in the old generation is in the smnaller blocks. > > 2. Going by these stats, the total free space is only 349438 words > (~2.8 MB) > but still minor collections succeed (before eventually getting promotion > failed error). Where does the free space to provide Young Generation > Guarantee (~240 MB) come from? See above; the majority of the churn is expected to be in the smaller blocks, with the larger ones providing a buffer for volatility in the demand for the smaller blocks (plus the occasional large object or two) -- provided the free block demand estimation and coalescing is working OK. > > 3. Why there are two sets of BinaryTreeDictionary stats? There is one set for each of the Old Generation and the Perm Generation, both of which are marked by the CMS collector. (But perm generation collection requires an additional flag or two.) > > Any help will be greatly appreciated. Thanks in advance. Hope that helps, and feel free to ask more questions, and/or study the associated code in OpenJDK 7 (src/share/vm/gc_implementation/concurrentMarkSweep). The option -XX:PrintFLSStatistics=2 produces (at much higher cost) stats related to the smaller free blocks kept in the array(s) indexed by size as well. 
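If a sketch helps make that organization concrete, here is a toy model in Java of the two-level structure described above (the real implementation is C++ in the directory mentioned; the 256-word threshold and every name below -- FreeSpaceModel, smallLists, largeBlocks -- are invented for illustration and are not HotSpot identifiers):

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Toy model: small free blocks sit on per-size free lists; blocks at or
// above the threshold go into a size-indexed dictionary, which stands in
// for the "BinaryTreeDictionary" reported by PrintFLSStatistics.
public class FreeSpaceModel {
    private static final int SMALL_THRESHOLD = 256;   // words; illustrative only
    private final List<LinkedList<Long>> smallLists = new ArrayList<LinkedList<Long>>();
    private final SortedMap<Integer, LinkedList<Long>> largeBlocks =
        new TreeMap<Integer, LinkedList<Long>>();

    public FreeSpaceModel() {
        for (int i = 0; i < SMALL_THRESHOLD; i++) {
            smallLists.add(new LinkedList<Long>());
        }
    }

    // Record a free block of sizeInWords words starting at address addr.
    public void addFreeBlock(long addr, int sizeInWords) {
        if (sizeInWords < SMALL_THRESHOLD) {
            smallLists.get(sizeInWords).addFirst(addr);
        } else {
            LinkedList<Long> list = largeBlocks.get(sizeInWords);
            if (list == null) {
                list = new LinkedList<Long>();
                largeBlocks.put(sizeInWords, list);
            }
            list.addFirst(addr);
        }
    }

    // Satisfy a promotion of 'size' words: prefer an exact-fit small list,
    // otherwise split the smallest large block that is big enough.
    // Returning null is the rough analogue of a promotion failure.
    public Long allocate(int size) {
        if (size < SMALL_THRESHOLD && !smallLists.get(size).isEmpty()) {
            return smallLists.get(size).removeFirst();
        }
        SortedMap<Integer, LinkedList<Long>> fits = largeBlocks.tailMap(size);
        if (fits.isEmpty()) {
            return null;
        }
        int blockSize = fits.firstKey();
        LinkedList<Long> list = largeBlocks.get(blockSize);
        long addr = list.removeFirst();
        if (list.isEmpty()) {
            largeBlocks.remove(blockSize);
        }
        if (blockSize > size) {
            addFreeBlock(addr + size, blockSize - size);   // give back the tail
        }
        return addr;
    }
}

The real collector additionally coalesces adjacent free blocks and keeps per-size demand estimates, which the toy above ignores; it is only meant to make the small-list / large-block split easier to picture.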
-- ramki > > Neeraj > > =======================================JVM GC > Logs==================================================== > > 40330.741: [ParNewBefore GC: > Before GC: > > Desired survivor size 11173888 bytes, new threshold 2 (max 5) > - age 1: 7549224 bytes, 7549224 total > - age 2: 5868416 bytes, 13417640 total > : 231488K->13203K(240320K), 0.4537259 secs] > 5235545K->5023066K(6269632K)Statistics for BinaryTreeDictionary: > ------------------------------------ > Total Free Space: 349438 > Max Chunk Size: 349438 > Number of Blocks: 1 > Av. Block Size: 349438 > Tree Height: 1 > Statistics for BinaryTreeDictionary: > ------------------------------------ > Total Free Space: 4440679 > Max Chunk Size: 4440679 > Number of Blocks: 1 > Av. Block Size: 4440679 > Tree Height: 1 > Heap after GC invocations=2396: > Heap > par new generation total 240320K, used 13203K [0xfffffffdefc00000, > 0xfffffffdffc00000, 0xfffffffdffc00000) > eden space 218496K, 0% used [0xfffffffdefc00000, 0xfffffffdefc00000, > 0xfffffffdfd160000) > from space 21824K, 60% used [0xfffffffdfd160000, 0xfffffffdfde44d50, > 0xfffffffdfe6b0000) > to space 21824K, 0% used [0xfffffffdfe6b0000, 0xfffffffdfe6b0000, > 0xfffffffdffc00000) > concurrent mark-sweep generation total 6029312K, used 5009862K > [0xfffffffdffc00000, 0xffffffff6fc00000, 0xffffffff6fc00000) > concurrent-mark-sweep perm gen total 98304K, used 63210K > [0xffffffff6fc00000, 0xffffffff75c00000, 0xffffffff75c00000) > } , 0.4557118 secs] > After GC: > After GC: > 40344.958: [GC {Heap before GC invocations=2396: > Heap > par new generation total 240320K, used 231699K [0xfffffffdefc00000, > 0xfffffffdffc00000, 0xfffffffdffc00000) > eden space 218496K, 100% used [0xfffffffdefc00000, 0xfffffffdfd160000, > 0xfffffffdfd160000) > from space 21824K, 60% used [0xfffffffdfd160000, 0xfffffffdfde44d50, > 0xfffffffdfe6b0000) > to space 21824K, 0% used [0xfffffffdfe6b0000, 0xfffffffdfe6b0000, > 0xfffffffdffc00000) > concurrent mark-sweep generation total 6029312K, used 5008129K > [0xfffffffdffc00000, 0xffffffff6fc00000, 0xffffffff6fc00000) > concurrent-mark-sweep perm gen total 98304K, used 63210K > [0xffffffff6fc00000, 0xffffffff75c00000, 0xffffffff75c00000) > Statistics for BinaryTreeDictionary: > ------------------------------------ > Total Free Space: 353702 > Max Chunk Size: 349438 > Number of Blocks: 9 > Av. Block Size: 39300 > Tree Height: 5 > Statistics for BinaryTreeDictionary: > ------------------------------------ > Total Free Space: 4440679 > Max Chunk Size: 4440679 > Number of Blocks: 1 > Av. 
Block Size: 4440679 > Tree Height: 1 > 40344.960: [ParNewBefore GC: > Before GC: > (promotion failed) > Desired survivor size 11173888 bytes, new threshold 2 (max 5) > - age 1: 7604792 bytes, 7604792 total > - age 2: 5978704 bytes, 13583496 total > : 231699K->231699K(240320K), 1.2985705 secs]40346.259: [CMS40358.345: > [CMS-concurrent-sweep: 25.860/27.699 secs] > (CMS-concurrent-sweep yielded 2 times) > (concurrent mode failure)[Unloading class > sun.reflect.GeneratedSerializationConstructorAccessor93] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor30] > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor55] > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From Y.S.Ramakrishna at Sun.COM Mon Aug 18 14:29:27 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Mon, 18 Aug 2008 14:29:27 -0700 Subject: JDK 5.0 Native Heap Issues? In-Reply-To: <304E9E55F6A4BE4B910C2437D4D1B4960B22106559@MERCMBX14.na.sas.com> References: <304E9E55F6A4BE4B910C2437D4D1B4960B22106559@MERCMBX14.na.sas.com> Message-ID: Hi Keith -- To add to Jon's response, native heap issues would typically tend to be orthogonal to the frequency or otherwise of Java heap collections. (That's in theory, but for instance, native heap leaks from for example Java full heap collections can cause slow leaks in native heap memory of course; but see more below.) The native heap leak might also come from something else in the JVM which may have gone away when you switched the JDK's. It is quite unlikely, though, that a native heap memory leak is tied explicitly to the behaviour of CMS and of the frequency of full heap collections. There is however another way in which Java heap collections may relate to native heap pressure, related to what Jon was saying. In JDK 6 CMS, as Jon stated, all scavenge-survivors are not immediately promoted into the old generation. That means that typically a piece of dead space (i.e. occupied by a dead object) is likely recycled sooner than with the default settings in JDK 1.5 CMS, where objects may get prematurely promoted and then immediately die, but languish uncollected in the old generation because of the realtive infrequency of those collections vis-a-vis scavenges. The fact that unreachable objects are identified sooner in JDK 6 CMS/default settings (when they die in the young generation) means that, if there are associated finalizers or other kinds of weak reference objects, then those are enqueued sooner and any post-mortem clean-ups (including potentially any associated native storage) might likely get run sooner (thus possibly reducing the native heap footprint as well). Do you believe you may have in your application or in associated 3rd party libraries native memory or resources potentially tied to objects requiring finalization or other clean-up? If so, you might want to monitor that churn/activity. -- ramki ----- Original Message ----- From: Keith Holdaway Date: Monday, August 18, 2008 5:45 am Subject: JDK 5.0 Native Heap Issues? 
To: Y Srinivas Ramakrishna , "Jones, Doug H" Cc: "hotspot-gc-use at openjdk.java.net" > Hi, > > We have been running an endurance test suite with LoadRunner against > JDK 5.0 u14 with the following VM arguments: > > et JAVA_OPTS=%JAVA_OPTS% -Xms1000m -Xmx1000m -XX:PermSize=87m > -XX:MaxPermSize=87m -Xss96k -XX:-UseTLAB -XX:+UseConcMarkSweepGC > -XX:+DisableExplicitGC -XX:NewSize=128m -XX:MaxNewSize=128m > -Dcom.sun.management.jmxremote -Dsun.rmi.dgc.client.gcInterval=3600000 > -Dsun.rmi.dgc.server.gcInterval=3600000 -Djava.awt.headless=true -Dsas.svcs.http.max.connections=50 > > And we keep seeing the following error message after 15 hours: > > Exception java.lang.OutOfMemoryError: requested 655360 bytes for GrET* > in > C:/BUILD_AREA/jdk1.5.0_15/hotspot\src\share\vm\utilities\growableArray.cpp. > Out of swap space? > > I suggested that this error is the result of native heap issues - > fragmentation perhaps, and so reducing the -Xmx and -Xss and > MaxPermGen would enable more native heap. This is a 32 bit Windows > box, and the /3GB switch is turned on. > > The tester has added the following two VM args to enable an > improvement in CMS usage, since it seems our application allocates at > such a rate that CMS is overrun and Full GCs interrupt the CMS algorithm: > > -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=40 > > But he also changed the JDK to 6.0 u7, and now the endurance test has > run for 25 hrs? > > We are not sure if the success is contributed to JDK6 or to > -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=40 > > Any ideas? > > Also, is the -XX:+UseCMSCompactAtFullCollection a default behaviour > for JDK 5.0? > > thanks > > keith > _ From neeraj0jain at gmail.com Tue Aug 19 03:36:52 2008 From: neeraj0jain at gmail.com (Neeraj Jain) Date: Tue, 19 Aug 2008 16:06:52 +0530 Subject: 1.4.2 PrintFLSStatistics Output In-Reply-To: References: Message-ID: Hi Ramki, Thanks a lot for your response. I still have a query on your answer to question #2. As I understand, for a YG promotion to be successful GC needs a contiguous memory chunk equal to the sum of sizes of Eden and From space which comes out to be approx. 230 MB in our case. The total free space in the "BinaryTreeDictionary" containing larger blocks is only ~2.8 MB but the promotions are still succeeding. My question is: *where the GC is getting the contiguous memory chunk of 230 MB from?* Thanks & Regards, Neeraj On Mon, Aug 18, 2008 at 9:41 PM, Y Srinivas Ramakrishna < Y.S.Ramakrishna at sun.com> wrote: > Hi Neeraj -- > > > We are using Java 1.4.2_17 (64-bit mode) to run long running server > > application on solaris using following java options > > > > -server -d64 -XX:+HandlePromotionFailure > > -XX:CMSInitiatingOccupancyFraction=70 -XX:NewSize=256m -XX:MaxNewSize=2 > > 56m -XX:SurvivorRatio=10 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC > > -XX:+PrintTenuringDistribution -XX:PrintCMSStatistics=1 > -XX:+PrintGCDetails > > -XX:+Pri > > ntGCTimeStamps -XX:PrintFLSStatistics=1 -XX:+PrintHeapAtGC > > -XX:+PrintClassHistogram -XX:PermSize=96m -XX:MaxPermSize=96m > > -XX:MaxTenuringThreshold=5 -X > > ms6g -Xmx6g -Xss256k > > > > In order to isolate a fragmentation issue, we started using > > -XX:PrintFLSStatistics option to see if the fragmentation is really > taking > > place. However, the output looks quite confusing. I have following > specific > > queries around this (please refer to the GC logs given below): > > > > 1. 
As per GC stats, around 5 GB of memory is taken by the application > > so > > there should be ~0.75 GB free in tenured generation but > BinaryTreeDictionary > > statistics show only around 38 MB of memory in Free List Trees? Where > > have > > the rest of 700 MB gone? > > The free space in the old generation is kept mainly in two kinds of free > lists. > Free blocks larger than some threshold (i think 1 KB, but am not sure > without > checking the code) are kept in a binary tree indexed by size and printed as > part of the "BinaryTreeDictionary". Free blocks smaller than that threshold > are kept in an array of free lists indexed by size. Since most objects > created by Java programs are usually smaller than the 1 KB threshold, you > will usually find that most of the dynamic footprint in the old generation > is in the smnaller blocks. > > > > > 2. Going by these stats, the total free space is only 349438 words > > (~2.8 MB) > > but still minor collections succeed (before eventually getting promotion > > failed error). Where does the free space to provide Young Generation > > Guarantee (~240 MB) come from? > > See above; the majority of the churn is expected to be in the smaller > blocks, with the larger ones providing a buffer for volatility in the > demand for the smaller blocks (plus the occasional large object or two) > -- provided the free block demand estimation and coalescing is working > OK. > > > > > 3. Why there are two sets of BinaryTreeDictionary stats? > > There is one set for each of the Old Generation and the Perm Generation, > both of which are marked by the CMS collector. (But perm generation > collection requires an additional flag or two.) > > > > > Any help will be greatly appreciated. Thanks in advance. > > Hope that helps, and feel free to ask more questions, and/or study the > associated code in OpenJDK 7 > (src/share/vm/gc_implementation/concurrentMarkSweep). > > The option -XX:PrintFLSStatistics=2 produces (at much higher cost) > stats related to the smaller free blocks kept in the array(s) indexed by > size as well. > > -- ramki > > > > > Neeraj > > > > =======================================JVM GC > > Logs==================================================== > > > > 40330.741: [ParNewBefore GC: > > Before GC: > > > > Desired survivor size 11173888 bytes, new threshold 2 (max 5) > > - age 1: 7549224 bytes, 7549224 total > > - age 2: 5868416 bytes, 13417640 total > > : 231488K->13203K(240320K), 0.4537259 secs] > > 5235545K->5023066K(6269632K)Statistics for BinaryTreeDictionary: > > ------------------------------------ > > Total Free Space: 349438 > > Max Chunk Size: 349438 > > Number of Blocks: 1 > > Av. Block Size: 349438 > > Tree Height: 1 > > Statistics for BinaryTreeDictionary: > > ------------------------------------ > > Total Free Space: 4440679 > > Max Chunk Size: 4440679 > > Number of Blocks: 1 > > Av. 
Block Size: 4440679 > > Tree Height: 1 > > Heap after GC invocations=2396: > > Heap > > par new generation total 240320K, used 13203K [0xfffffffdefc00000, > > 0xfffffffdffc00000, 0xfffffffdffc00000) > > eden space 218496K, 0% used [0xfffffffdefc00000, 0xfffffffdefc00000, > > 0xfffffffdfd160000) > > from space 21824K, 60% used [0xfffffffdfd160000, 0xfffffffdfde44d50, > > 0xfffffffdfe6b0000) > > to space 21824K, 0% used [0xfffffffdfe6b0000, 0xfffffffdfe6b0000, > > 0xfffffffdffc00000) > > concurrent mark-sweep generation total 6029312K, used 5009862K > > [0xfffffffdffc00000, 0xffffffff6fc00000, 0xffffffff6fc00000) > > concurrent-mark-sweep perm gen total 98304K, used 63210K > > [0xffffffff6fc00000, 0xffffffff75c00000, 0xffffffff75c00000) > > } , 0.4557118 secs] > > After GC: > > After GC: > > 40344.958: [GC {Heap before GC invocations=2396: > > Heap > > par new generation total 240320K, used 231699K [0xfffffffdefc00000, > > 0xfffffffdffc00000, 0xfffffffdffc00000) > > eden space 218496K, 100% used [0xfffffffdefc00000, 0xfffffffdfd160000, > > 0xfffffffdfd160000) > > from space 21824K, 60% used [0xfffffffdfd160000, 0xfffffffdfde44d50, > > 0xfffffffdfe6b0000) > > to space 21824K, 0% used [0xfffffffdfe6b0000, 0xfffffffdfe6b0000, > > 0xfffffffdffc00000) > > concurrent mark-sweep generation total 6029312K, used 5008129K > > [0xfffffffdffc00000, 0xffffffff6fc00000, 0xffffffff6fc00000) > > concurrent-mark-sweep perm gen total 98304K, used 63210K > > [0xffffffff6fc00000, 0xffffffff75c00000, 0xffffffff75c00000) > > Statistics for BinaryTreeDictionary: > > ------------------------------------ > > Total Free Space: 353702 > > Max Chunk Size: 349438 > > Number of Blocks: 9 > > Av. Block Size: 39300 > > Tree Height: 5 > > Statistics for BinaryTreeDictionary: > > ------------------------------------ > > Total Free Space: 4440679 > > Max Chunk Size: 4440679 > > Number of Blocks: 1 > > Av. Block Size: 4440679 > > Tree Height: 1 > > 40344.960: [ParNewBefore GC: > > Before GC: > > (promotion failed) > > Desired survivor size 11173888 bytes, new threshold 2 (max 5) > > - age 1: 7604792 bytes, 7604792 total > > - age 2: 5978704 bytes, 13583496 total > > : 231699K->231699K(240320K), 1.2985705 secs]40346.259: [CMS40358.345: > > [CMS-concurrent-sweep: 25.860/27.699 secs] > > (CMS-concurrent-sweep yielded 2 times) > > (concurrent mode failure)[Unloading class > > sun.reflect.GeneratedSerializationConstructorAccessor93] > > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor30] > > [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor55] > > _______________________________________________ > > hotspot-gc-use mailing list > > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080819/ae038a83/attachment.html From Y.S.Ramakrishna at Sun.COM Tue Aug 19 10:42:33 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Tue, 19 Aug 2008 10:42:33 -0700 Subject: 1.4.2 PrintFLSStatistics Output In-Reply-To: References: Message-ID: Hi Neeraj -- Good question; this has come up in another recent discussion on this list. > I still have a query on your answer to question #2. As I understand, > for a > YG promotion to be successful GC needs a contiguous memory chunk equal > to > the sum of sizes of Eden and From space which comes out to be approx. > 230 MB > in our case. 
The total free space in the "BinaryTreeDictionary" containing > larger blocks is only ~2.8 MB but the promotions are still succeeding. > > My question is: *where the GC is getting the contiguous memory chunk > of 230 > MB from?* That was indeed the old constraint in the early versions of 1.4.2, which was fixed in some early version of 5.0 (if i recall correctly) so that we could handle "promotion failure" (where mid-scavenge we discover that we have no space in the old gen to promote a live object from the young gen). That code was subsequently also backported to some version of 1.4.2, probably 1.4.2_11 or later (i am not sure precisely which version, but can find out if you really care). As a result, (at least) in 1.4.2_17, in fact, there isn't such a restriction, and scavenges will occur even in the absence of the pessimal "full promotion guarantee". -- ramki From neeraj0jain at gmail.com Wed Aug 20 04:15:11 2008 From: neeraj0jain at gmail.com (Neeraj Jain) Date: Wed, 20 Aug 2008 16:45:11 +0530 Subject: 1.4.2 PrintFLSStatistics Output In-Reply-To: References: Message-ID: Thanks Ramki. This has cleared lots of our doubts. To complete the discussion I have one more query left in the same context. We have seen the YG promotions succeeding with BinaryTreeDictionary containing Max Chunk Sizes as small as 600 words (4800 bytes) though Total Free Space was much higher. Does that mean that, in addition to not requiring space to accommodate full Eden+From spaces, java 1.4.2_17 no longer has the constraint needing "*single contiguous memory chunk*" also to accommodate all the promoted objects in old generation? Regards, Neeraj On Tue, Aug 19, 2008 at 11:12 PM, Y Srinivas Ramakrishna < Y.S.Ramakrishna at sun.com> wrote: > > Hi Neeraj -- > > Good question; this has come up in another recent discussion on > this list. > > > I still have a query on your answer to question #2. As I understand, > > for a > > YG promotion to be successful GC needs a contiguous memory chunk equal > > to > > the sum of sizes of Eden and From space which comes out to be approx. > > 230 MB > > in our case. The total free space in the "BinaryTreeDictionary" > containing > > larger blocks is only ~2.8 MB but the promotions are still succeeding. > > > > My question is: *where the GC is getting the contiguous memory chunk > > of 230 > > MB from?* > > > That was indeed the old constraint in the early versions of 1.4.2, > which was fixed in some early version of 5.0 (if i recall correctly) so > that we could handle "promotion failure" (where mid-scavenge we > discover that we have no space in the old gen to promote a live > object from the young gen). That code was subsequently also backported > to some version of 1.4.2, probably 1.4.2_11 or later (i am not > sure precisely which version, but can find out if you really care). > As a result, (at least) in 1.4.2_17, in fact, there isn't such a > restriction, > and scavenges will occur even in the absence of the pessimal "full > promotion guarantee". > > -- ramki > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080820/f3a7e73f/attachment.html From Y.S.Ramakrishna at Sun.COM Wed Aug 20 09:33:55 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Wed, 20 Aug 2008 09:33:55 -0700 Subject: 1.4.2 PrintFLSStatistics Output In-Reply-To: References: Message-ID: Hi Neeraj -- > Thanks Ramki. This has cleared lots of our doubts. 
To complete the > discussion I have one more query left in the same context. > > We have seen the YG promotions succeeding with BinaryTreeDictionary > containing Max Chunk Sizes as small as 600 words (4800 bytes) though Total > Free Space was much higher. Does that mean that, in addition to not > requiring space to accommodate full Eden+From spaces, java 1.4.2_17 no > longer has the constraint needing "*single contiguous memory chunk*" > also to > accommodate all the promoted objects in old generation? Yes. Perhaps I was not clear on how this works and why we needed the more pessimal "single contiguous memory chunk" restriction until recently in 1.4.2_XX. Basically, previous to the fixes that allowed us to bail out midway from a failed scavenge, scavenges would be "all or nothing". We _had_ to be absolutely certain that all of the objects surviving a scavenge could be accomodated in the old generation because we did not have the means to "interrupt" a (failed) scavenge mid-way and revert to full mark-sweep compact. The simplest way of guaranteeing that a scavenge succeeded would be of guaranteeing that the old gen had a single contiguous free block that could accomodate all of eden + survivor. However, once we had the means of bailing from a failed scavenge to a full mark-sweep compact, we could now relax that guarantee, and instead try to _estimate_ with high probability when a scavenge would succeed. (In our case, I think we took the simple route of relying on the sweeper to estimate block demand correctly -- something that we do fairly well when block size distribution is stationary and there's sufficient space in the old generation to absorb occasional volatility -- and somewhat optimistically assume that if there's enough free space in the old generation that scavenges will succeed. Of course, when these assumptions break down, as they sometimes do, when there is not sufficient space in the old generation or we fail in correctly estimating block demand, then scavenges do fail and in that case we bail to full mark-sweep compact at some considerable cost in terms of pause-time.) Hope that answered yr question. -- ramki > > Regards, > Neeraj > > On Tue, Aug 19, 2008 at 11:12 PM, Y Srinivas Ramakrishna < > Y.S.Ramakrishna at sun.com> wrote: > > > > > Hi Neeraj -- > > > > Good question; this has come up in another recent discussion on > > this list. > > > > > I still have a query on your answer to question #2. As I understand, > > > for a > > > YG promotion to be successful GC needs a contiguous memory chunk equal > > > to > > > the sum of sizes of Eden and From space which comes out to be approx. > > > 230 MB > > > in our case. The total free space in the "BinaryTreeDictionary" > > containing > > > larger blocks is only ~2.8 MB but the promotions are still succeeding. > > > > > > My question is: *where the GC is getting the contiguous memory chunk > > > of 230 > > > MB from?* > > > > > > That was indeed the old constraint in the early versions of 1.4.2, > > which was fixed in some early version of 5.0 (if i recall correctly) > so > > that we could handle "promotion failure" (where mid-scavenge we > > discover that we have no space in the old gen to promote a live > > object from the young gen). That code was subsequently also backported > > to some version of 1.4.2, probably 1.4.2_11 or later (i am not > > sure precisely which version, but can find out if you really care). 
> > As a result, (at least) in 1.4.2_17, in fact, there isn't such a > > restriction, > > and scavenges will occur even in the absence of the pessimal "full > > promotion guarantee". > > > > -- ramki > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From neeraj0jain at gmail.com Wed Aug 20 09:54:14 2008 From: neeraj0jain at gmail.com (Neeraj Jain) Date: Wed, 20 Aug 2008 22:24:14 +0530 Subject: 1.4.2 PrintFLSStatistics Output In-Reply-To: References: Message-ID: Hi Ramki, Thanks for the pain you took to explain the details. Appreciate your help. Neeraj On Wed, Aug 20, 2008 at 10:03 PM, Y Srinivas Ramakrishna < Y.S.Ramakrishna at sun.com> wrote: > > Hi Neeraj -- > > > Thanks Ramki. This has cleared lots of our doubts. To complete the > > discussion I have one more query left in the same context. > > > > We have seen the YG promotions succeeding with BinaryTreeDictionary > > containing Max Chunk Sizes as small as 600 words (4800 bytes) though > Total > > Free Space was much higher. Does that mean that, in addition to not > > requiring space to accommodate full Eden+From spaces, java 1.4.2_17 no > > longer has the constraint needing "*single contiguous memory chunk*" > > also to > > accommodate all the promoted objects in old generation? > > Yes. Perhaps I was not clear on how this works and why we needed the > more pessimal "single contiguous memory chunk" restriction until recently > in > 1.4.2_XX. > > Basically, previous to the fixes that allowed us to bail out midway > from a failed scavenge, scavenges would be "all or nothing". We _had_ to > be absolutely certain that all of the objects surviving a scavenge > could be accomodated in the old generation because we did not have > the means to "interrupt" a (failed) scavenge mid-way and revert to > full mark-sweep compact. The simplest way of guaranteeing that a scavenge > succeeded would be of guaranteeing that the old gen had a single > contiguous free block that could accomodate all of eden + survivor. > However, once we had the means of bailing from a failed scavenge > to a full mark-sweep compact, we could now relax that guarantee, > and instead try to _estimate_ with high probability when a scavenge > would succeed. (In our case, I think we took the simple route of > relying on the sweeper to estimate block demand correctly -- something > that we do fairly well when block size distribution is stationary and > there's sufficient space in the old generation to absorb occasional > volatility -- and somewhat optimistically assume that if there's > enough free space in the old generation that scavenges will > succeed. Of course, when these assumptions break down, as they sometimes > do, when there is not sufficient space in the old generation or > we fail in correctly estimating block demand, then scavenges do > fail and in that case we bail to full mark-sweep compact at some > considerable cost in terms of pause-time.) > > Hope that answered yr question. > -- ramki > > > > Regards, > > Neeraj > > > > On Tue, Aug 19, 2008 at 11:12 PM, Y Srinivas Ramakrishna < > > Y.S.Ramakrishna at sun.com> wrote: > > > > > > > > Hi Neeraj -- > > > > > > Good question; this has come up in another recent discussion on > > > this list. > > > > > > > I still have a query on your answer to question #2. 
As I understand, > > > > for a > > > > YG promotion to be successful GC needs a contiguous memory chunk > equal > > > > to > > > > the sum of sizes of Eden and From space which comes out to be approx. > > > > 230 MB > > > > in our case. The total free space in the "BinaryTreeDictionary" > > > containing > > > > larger blocks is only ~2.8 MB but the promotions are still > succeeding. > > > > > > > > My question is: *where the GC is getting the contiguous memory chunk > > > > of 230 > > > > MB from?* > > > > > > > > > That was indeed the old constraint in the early versions of 1.4.2, > > > which was fixed in some early version of 5.0 (if i recall correctly) > > so > > > that we could handle "promotion failure" (where mid-scavenge we > > > discover that we have no space in the old gen to promote a live > > > object from the young gen). That code was subsequently also backported > > > to some version of 1.4.2, probably 1.4.2_11 or later (i am not > > > sure precisely which version, but can find out if you really care). > > > As a result, (at least) in 1.4.2_17, in fact, there isn't such a > > > restriction, > > > and scavenges will occur even in the absence of the pessimal "full > > > promotion guarantee". > > > > > > -- ramki > > > > > _______________________________________________ > > hotspot-gc-use mailing list > > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080820/67d638d0/attachment.html