From moazam at apple.com  Tue Nov  4 14:39:21 2008
From: moazam at apple.com (Moazam Raja)
Date: Tue, 4 Nov 2008 14:39:21 -0800
Subject: Where have the Full GCs gone?
Message-ID: <0E7AF2AC-F167-41AB-9780-873DBF58DACD@apple.com>

Hi all, I'm running a test and recording GC information on a Tomcat
application and have noticed that even after a few days, there are no
'Full GC' markers. Am I reading the log incorrectly, or are the Full
GCs getting logged elsewhere?

I'm using Java 1.5.0_13 on OS X with the following flags,

-Xms=2048m -Xmx=2048m
-server -XX:+UseConcMarkSweepGC
-Xloggc:/var/tmp/GC.log
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintHeapAtGC
-XX:+PrintClassHistogram
-XX:+PrintGCApplicationConcurrentTime

A sample of the output from my GC log:

Application time: 1.4105823 seconds
82558.187: [GC {Heap before gc invocations=111716:
 par new generation   total 21184K, used 21120K [0x0000000107270000, 0x0000000108730000, 0x0000000108730000)
  eden space 21120K, 100% used [0x0000000107270000, 0x0000000108710000, 0x0000000108710000)
  from space 64K,  0% used [0x0000000108710000, 0x0000000108710000, 0x0000000108720000)
  to   space 64K,  0% used [0x0000000108720000, 0x0000000108720000, 0x0000000108730000)
 concurrent mark-sweep generation total 20950272K, used 5483440K [0x0000000108730000, 0x0000000607270000, 0x0000000607270000)
 concurrent-mark-sweep perm gen total 39296K, used 23575K [0x0000000607270000, 0x00000006098d0000, 0x000000060c670000)
82558.187: [ParNew: 21120K->0K(21184K), 0.0669633 secs] 5504560K->5487545K(20971456K)Heap after gc invocations=111717:
 par new generation   total 21184K, used 0K [0x0000000107270000, 0x0000000108730000, 0x0000000108730000)
  eden space 21120K,  0% used [0x0000000107270000, 0x0000000107270000, 0x0000000108710000)
  from space 64K,  0% used [0x0000000108720000, 0x0000000108720000, 0x0000000108730000)
  to   space 64K,  0% used [0x0000000108710000, 0x0000000108710000, 0x0000000108720000)
 concurrent mark-sweep generation total 20950272K, used 5487545K [0x0000000108730000, 0x0000000607270000, 0x0000000607270000)
 concurrent-mark-sweep perm gen total 39296K, used 23575K [0x0000000607270000, 0x00000006098d0000, 0x000000060c670000)
}
, 0.0672098 secs]

Application time: 0.4661567 seconds
82558.721: [GC {Heap before gc invocations=111717:
 par new generation   total 21184K, used 21120K [0x0000000107270000, 0x0000000108730000, 0x0000000108730000)
  eden space 21120K, 100% used [0x0000000107270000, 0x0000000108710000, 0x0000000108710000)
  from space 64K,  0% used [0x0000000108720000, 0x0000000108720000, 0x0000000108730000)
  to   space 64K,  0% used [0x0000000108710000, 0x0000000108710000, 0x0000000108720000)
 concurrent mark-sweep generation total 20950272K, used 5487545K [0x0000000108730000, 0x0000000607270000, 0x0000000607270000)
 concurrent-mark-sweep perm gen total 39296K, used 23575K [0x0000000607270000, 0x00000006098d0000, 0x000000060c670000)
82558.721: [ParNew: 21120K->0K(21184K), 0.0591967 secs] 5508665K->5491650K(20971456K)Heap after gc invocations=111718:
 par new generation   total 21184K, used 0K [0x0000000107270000, 0x0000000108730000, 0x0000000108730000)
  eden space 21120K,  0% used [0x0000000107270000, 0x0000000107270000, 0x0000000108710000)
  from space 64K,  0% used [0x0000000108710000, 0x0000000108710000, 0x0000000108720000)
  to   space 64K,  0% used [0x0000000108720000, 0x0000000108720000, 0x0000000108730000)
 concurrent mark-sweep generation total 20950272K, used 5491650K [0x0000000108730000, 0x0000000607270000, 0x0000000607270000)
 concurrent-mark-sweep perm gen total 39296K, used 23575K [0x0000000607270000, 0x00000006098d0000, 0x000000060c670000)
}
, 0.0594283 secs]

Application time: 0.0590593 seconds
82558.840: [GC {Heap before gc invocations=111718:
 par new generation   total 21184K, used 21120K [0x0000000107270000, 0x0000000108730000, 0x0000000108730000)
  eden space 21120K, 100% used [0x0000000107270000, 0x0000000108710000, 0x0000000108710000)
  from space 64K,  0% used [0x0000000108710000, 0x0000000108710000, 0x0000000108720000)
  to   space 64K,  0% used [0x0000000108720000, 0x0000000108720000, 0x0000000108730000)
 concurrent mark-sweep generation total 20950272K, used 5491650K [0x0000000108730000, 0x0000000607270000, 0x0000000607270000)
 concurrent-mark-sweep perm gen total 39296K, used 23575K [0x0000000607270000, 0x00000006098d0000, 0x000000060c670000)
82558.840: [ParNew: 21120K->0K(21184K), 0.0589090 secs] 5512770K->5496450K(20971456K)Heap after gc invocations=111719:
 par new generation   total 21184K, used 0K [0x0000000107270000, 0x0000000108730000, 0x0000000108730000)
  eden space 21120K,  0% used [0x0000000107270000, 0x0000000107270000, 0x0000000108710000)
  from space 64K,  0% used [0x0000000108720000, 0x0000000108720000, 0x0000000108730000)
  to   space 64K,  0% used [0x0000000108710000, 0x0000000108710000, 0x0000000108720000)
 concurrent mark-sweep generation total 20950272K, used 5496450K [0x0000000108730000, 0x0000000607270000, 0x0000000607270000)
 concurrent-mark-sweep perm gen total 39296K, used 23575K [0x0000000607270000, 0x00000006098d0000, 0x000000060c670000)
}
, 0.0592058 secs]

-Moazam
From Y.S.Ramakrishna at Sun.COM  Tue Nov  4 14:49:56 2008
From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna)
Date: Tue, 04 Nov 2008 14:49:56 -0800
Subject: Where have the Full GCs gone?
In-Reply-To: <0E7AF2AC-F167-41AB-9780-873DBF58DACD@apple.com>
References: <0E7AF2AC-F167-41AB-9780-873DBF58DACD@apple.com>
Message-ID:

Hi Moazam --

With the CMS collector, look for GC lines tagged with the string
"CMS". Those indicate CMS collection activity. Please refer to the GC
tuning guide (under "low pause collector") for sample output.

There may also be full stop-the-world collections because of, for
example, promotion failure, which may not be explicitly called out
when a scavenge bails to a full collection. In those cases, look for
the string "fail".

In more recent JVMs, in addition to the "gc invocations" count (which
you see in your +PrintHeapAtGC output below), you will also see
"full = ..." counts which will tell you the number of major cycles
that happened.

If you have jstat on OS X, then jstat -gc would also give you FGC and
FGCT, which are respectively the number of, and the time spent in,
major GC cycles (these counters, however, are if I recall correctly
somewhat misleading in the case of CMS).

-- ramki

----- Original Message -----
From: Moazam Raja
Date: Tuesday, November 4, 2008 2:41 pm
Subject: Where have the Full GCs gone?
To: hotspot-gc-use at openjdk.java.net

> Hi all, I'm running a test and recording GC information on a Tomcat
> application and have noticed that even after a few days, there are no
> 'Full GC' markers. Am I reading the log incorrectly, or are the Full
> GCs getting logged elsewhere?
> [...]
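For readers scanning a log like the one above: with +PrintGCDetails,
CMS cycles show up as their own phase records rather than "Full GC"
lines. The excerpt below is illustrative only -- the shape matches
1.5-era CMS output, but every number is invented:

  [GC [1 CMS-initial-mark: 5483440K(20950272K)] 5504560K(20971456K), 0.0500000 secs]
  [CMS-concurrent-mark: 2.150/2.300 secs]
  [CMS-concurrent-preclean: 0.080/0.085 secs]
  [GC[YG occupancy: 10560 K (21184 K)][1 CMS-remark: 5483440K(20950272K)] 5494000K(20971456K), 0.0800000 secs]
  [CMS-concurrent-sweep: 1.900/1.950 secs]
  [CMS-concurrent-reset: 0.050/0.050 secs]

A scavenge that bails out is tagged "(promotion failed)", and a CMS
cycle overtaken by allocation shows "(concurrent mode failure)" --
both match the "fail" search suggested above. For the jstat route,
sampling the counters every five seconds looks like:

  jstat -gc <pid> 5000

where the FGC and FGCT columns are the major-collection count and
cumulative time referred to above.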
From moazam at apple.com  Tue Nov  4 15:26:14 2008
From: moazam at apple.com (Moazam Raja)
Date: Tue, 4 Nov 2008 15:26:14 -0800
Subject: Where have the Full GCs gone?
In-Reply-To:
References: <0E7AF2AC-F167-41AB-9780-873DBF58DACD@apple.com>
Message-ID: <6323F0D0-3FD0-4FB0-BF63-BF73546DE338@apple.com>

Ramki, thanks!

Do you know if there is a tool which I can feed this GC log into and
get some graphs?

-Moazam

On Nov 4, 2008, at 2:49 PM, Y Srinivas Ramakrishna wrote:

> Hi Moazam --
>
> With the CMS collector, look for GC lines tagged with the
> string "CMS". Those indicate CMS collection activity.
> [...]
From Y.S.Ramakrishna at Sun.COM  Tue Nov  4 15:45:25 2008
From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna)
Date: Tue, 04 Nov 2008 15:45:25 -0800
Subject: Where have the Full GCs gone?
In-Reply-To: <6323F0D0-3FD0-4FB0-BF63-BF73546DE338@apple.com>
References: <0E7AF2AC-F167-41AB-9780-873DBF58DACD@apple.com> <6323F0D0-3FD0-4FB0-BF63-BF73546DE338@apple.com>
Message-ID:

There's GCHisto, a link to which was recently sent out on this list
by Tony. Here's the link again:

  https://gchisto.dev.java.net/

Google should give you a few other useful links, including some
examples and user experiences.

BTW, I am not sure if GCHisto in its current form will accept GC logs
that have PrintHeapAtGC information. You'll need to elide that
information, or update GCHisto's parser to handle it.

Although jconsole does not have some of these capabilities, it has a
subset and can be used to monitor a running JVM, so you should check
it out to see if it suits your purposes.

Finally, there's VisualVM ( https://visualvm.dev.java.net/ ), which
might also have a suitable plugin; I am not sure, though. (Then
there's VisualGC, which I heard was going to be available as a plugin
for VisualVM.)

-- ramki

----- Original Message -----
From: Moazam Raja
Date: Tuesday, November 4, 2008 3:26 pm
Subject: Re: Where have the Full GCs gone?
To: hotspot-gc-use at openjdk.java.net

> Ramki, thanks!
>
> Do you know if there is a tool which I can feed this GC log into and
> get some graphs?
>
> -Moazam
> [...]
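If editing the parser isn't appealing, a rough pre-filter over the log
is an option. A sketch, assuming GNU egrep and the 1.5-era
PrintHeapAtGC layout shown earlier in this thread; note that the
"Heap before/after gc invocations" text fused onto the [GC and
[ParNew lines, and the split "}  , 0.0672098 secs]" tails, would
still need some hand-merging afterwards:

  egrep -v '^ *(par new generation|eden space|from space|to +space|concurrent mark-sweep|concurrent-mark-sweep perm|\})' GC.log > GC-trimmed.log

This drops only the per-space detail lines, which is the bulk of what
PrintHeapAtGC adds to each record.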
From Peter.Kessler at Sun.COM  Tue Nov  4 16:52:20 2008
From: Peter.Kessler at Sun.COM (Peter B. Kessler)
Date: Tue, 04 Nov 2008 16:52:20 -0800
Subject: Where have the Full GCs gone?
In-Reply-To: <0E7AF2AC-F167-41AB-9780-873DBF58DACD@apple.com>
References: <0E7AF2AC-F167-41AB-9780-873DBF58DACD@apple.com>
Message-ID: <4910EE44.7060704@Sun.COM>

Moazam Raja wrote:
> Hi all, I'm running a test and recording GC information on a Tomcat
> application and have noticed that even after a few days, there are no
> 'Full GC' markers. Am I reading the log incorrectly, or are the Full
> GCs getting logged elsewhere?
>
> -Xms=2048m -Xmx=2048m
> [...]
>  concurrent mark-sweep generation total 20950272K, used 5483440K [0x0000000108730000, 0x0000000607270000, 0x0000000607270000)
> [...]

I don't think those are the command line arguments for the GC log you
show. It looks from the PrintHeapAtGC output that your heap is 20GB,
not the 2GB shown on the command line: 20MB in the "par new"
generation, and the rest of the 20GB in the CMS generation.

It looks like you are using just over 5GB of CMS space, which would
explain why you haven't seen an old generation collection yet.

			... peter
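Peter's reading can be cross-checked from the addresses in the quoted
log: the young + old reservation runs from 0x0000000107270000 to
0x0000000607270000, so

  0x0000000607270000 - 0x0000000107270000 = 0x500000000 bytes
                                          = 5 x 2^32 = 20 x 2^30 = 20 GiB

and in kilobytes, 21184K (young) + 20950272K (old) = 20971456K,
exactly the total printed in the ParNew records. As an aside, the
flags as posted use an '=' (-Xms=2048m, -Xmx=2048m), which is not the
stock HotSpot syntax (that is -Xms2048m, with no '='); that may just
be a transcription slip, but it is consistent with the posted flags
not being the ones this VM actually ran with.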
From moazam at apple.com  Tue Nov  4 17:04:10 2008
From: moazam at apple.com (Moazam Raja)
Date: Tue, 4 Nov 2008 17:04:10 -0800
Subject: Where have the Full GCs gone?
In-Reply-To: <4910EE44.7060704@Sun.COM>
References: <0E7AF2AC-F167-41AB-9780-873DBF58DACD@apple.com> <4910EE44.7060704@Sun.COM>
Message-ID: <242C68AA-B22F-4C78-9866-60FE648C80BC@apple.com>

Peter, this must be a bug in the GC output as I know for sure I don't
have the memory to run a 20GB heap.

Instead of 20950272K, it must mean 20950272Kb.

This is on Java 1.5.0 on OS X Leopard.

java version "1.5.0_13"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_13-b05-237)
Java HotSpot(TM) Server VM (build 1.5.0_13-119, mixed mode)

-Moazam

On Nov 4, 2008, at 4:52 PM, Peter B. Kessler wrote:

> I don't think those are the command line arguments for the GC log
> you show. It looks from the PrintHeapAtGC output that your heap is
> 20GB, not the 2GB shown on the command line.
> [...]

From moazam at apple.com  Tue Nov  4 17:36:32 2008
From: moazam at apple.com (Moazam Raja)
Date: Tue, 4 Nov 2008 17:36:32 -0800
Subject: Where have the Full GCs gone?
In-Reply-To:
References: <0E7AF2AC-F167-41AB-9780-873DBF58DACD@apple.com> <4910EE44.7060704@Sun.COM> <242C68AA-B22F-4C78-9866-60FE648C80BC@apple.com>
Message-ID:

Peter, you were right, once again. :)

Indeed, the heap is set to 20GB. Not what I had expected! Thanks!

-Moazam

> ----- Original Message -----
> From: Moazam Raja
> Date: Tuesday, November 4, 2008 5:04 pm
> Subject: Re: Where have the Full GCs gone?
> To: "Peter B. Kessler"
> Cc: hotspot-gc-use at openjdk.java.net
>
>> Peter, this must be a bug in the GC output as I know for sure I don't
>> have the memory to run a 20GB heap.
>> [...]
From Peter.Kessler at Sun.COM  Tue Nov  4 17:38:01 2008
From: Peter.Kessler at Sun.COM (Peter B. Kessler)
Date: Tue, 04 Nov 2008 17:38:01 -0800
Subject: Where have the Full GCs gone?
In-Reply-To: <242C68AA-B22F-4C78-9866-60FE648C80BC@apple.com>
References: <0E7AF2AC-F167-41AB-9780-873DBF58DACD@apple.com> <4910EE44.7060704@Sun.COM> <242C68AA-B22F-4C78-9866-60FE648C80BC@apple.com>
Message-ID: <4910F8F9.90702@Sun.COM>

You can *reserve* more virtual memory than you have physical memory.
You just won't be happy when you exceed the amount of physical memory
you have. How much physical memory do you have? If it's, say, 8GB, you
might not be to the point of paging yet, so things will look fine. But
I suspect that at some point your application time and your GC times
will rise as you need to page the heap to follow references. Usually
that's a pretty steep cliff.

This is Apple's port of JDK-1.5.0_13, right? I can't imagine that they
changed the output format.

			... peter

Moazam Raja wrote:
> Peter, this must be a bug in the GC output as I know for sure I don't
> have the memory to run a 20GB heap.
>
> Instead of 20950272K, it must mean 20950272K*b*.
> [...]
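Two quick ways to see what heap the VM actually configured,
independent of what a startup script claims (assuming these standard
JDK tools are present and behave the usual way in Apple's port):

  java -XX:+PrintCommandLineFlags -version
  jmap -heap <pid>

The first prints the heap-related flag values the VM settles on at
startup; the second reports the generation sizes of a running VM.
Either would have surfaced the 20GB reservation immediately.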
From Antonios.Printezis at sun.com  Tue Nov  4 21:49:46 2008
From: Antonios.Printezis at sun.com (Tony Printezis)
Date: Wed, 05 Nov 2008 00:49:46 -0500
Subject: Where have the Full GCs gone?
In-Reply-To:
References: <0E7AF2AC-F167-41AB-9780-873DBF58DACD@apple.com> <6323F0D0-3FD0-4FB0-BF63-BF73546DE338@apple.com>
Message-ID: <491133FA.8010304@sun.com>

Y Srinivas Ramakrishna wrote:
> BTW, I am not sure if GCHisto in its current form will
> accept GC logs that have PrintHeapAtGC information. You'll
> need to elide that information, or update GCHisto's parser
> to handle it.

Ramki,

Yes, I believe the GCHisto parser does currently have trouble reading
GC logs with PrintHeapAtGC information...

Tony

-- 
---------------------------------------------------------------------
| Tony Printezis, Staff Engineer   | Sun Microsystems Inc.           |
|                                  | MS UBUR02-311                   |
| e-mail: tony.printezis at sun.com   | 35 Network Drive                |
| office: +1 781 442 0998 (x20998) | Burlington, MA 01803-2756, USA  |
---------------------------------------------------------------------
e-mail client: Thunderbird (Linux)

From guanxiaohua at gmail.com  Fri Nov 14 14:12:44 2008
From: guanxiaohua at gmail.com (Tony Guan)
Date: Fri, 14 Nov 2008 16:12:44 -0600
Subject: Openjdk hotspot build 14.0-b06 hard coded bug found.
Message-ID: <2fcb552b0811141412s5cb90c90od356f80b381fcbb5@mail.gmail.com>

Hi there,

I think I've found a little bug in the parallel GC code. Will somebody
take a look at it?

Firstly, after a full GC, there may be a need to adjust the boundary
between the young and old generations. Here in psMarkSweep.cpp we use:

  heap->resize_old_gen(size_policy->calculated_old_free_size_in_bytes());

This method will in turn call:

  gens()->adjust_boundary_for_old_gen_needs(desired_free_space);

In this function, we compare desired_free_space with the current free
space, and then call request_old_gen_expansion:

  if (old_gen()->free_in_bytes() < desired_free_space) {
    MutexLocker x(ExpandHeap_lock);
    request_old_gen_expansion(desired_free_space);

But in request_old_gen_expansion, desired_free_space is immediately
treated as expand_in_bytes. In this implementation, the actual change
in bytes is computed like this:

  size_t change_in_bytes = MIN3(young_gen_available,
                                old_gen_available,
                                align_size_up_(expand_in_bytes, alignment));

and then:

  virtual_spaces()->adjust_boundary_up(change_in_bytes)

So in the end we have old_gen()->free_in_bytes() += desired_free_space,
which is more free space than needed.

And after the boundary is moved up (too high), the
_old_gen->resize(desired_free_space) will compute the new space size as:

  size_t new_size = used_in_bytes() + desired_free_space;

Thus it will shrink the old gen space back down. So part of the space
given up by the young generation ends up outside the old gen's usable
space, and the size of this wasted memory is (desired_free_space - the
previously free space of the old gen).

Thanks!

Tony Guan
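Putting the report together, the one-line fix proposed later in the
thread sits in AdjoiningGenerations::adjust_boundary_for_old_gen_needs().
The following is a sketch reconstructed from the snippets quoted in
this thread, not the verbatim HotSpot source:

  // adjoiningGenerations.cpp (sketch, reconstructed from the thread)
  void AdjoiningGenerations::adjust_boundary_for_old_gen_needs(
      size_t desired_free_space) {
    // Only act when the old gen's current free space falls short.
    if (old_gen()->free_in_bytes() < desired_free_space) {
      MutexLocker x(ExpandHeap_lock);
      // Buggy version passed the full desired free space, so the
      // boundary moved by desired_free_space even though part of that
      // amount was already free:
      //   request_old_gen_expansion(desired_free_space);
      // Proposed fix: expand only by the shortfall.
      request_old_gen_expansion(desired_free_space -
                                old_gen()->free_in_bytes());
    }
  }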
From Jon.Masamitsu at Sun.COM  Fri Nov 14 14:58:12 2008
From: Jon.Masamitsu at Sun.COM (Jon Masamitsu)
Date: Fri, 14 Nov 2008 14:58:12 -0800
Subject: Openjdk hotspot build 14.0-b06 hard coded bug found.
In-Reply-To: <2fcb552b0811141412s5cb90c90od356f80b381fcbb5@mail.gmail.com>
References: <2fcb552b0811141412s5cb90c90od356f80b381fcbb5@mail.gmail.com>
Message-ID: <491E0284.4000500@sun.com>

Tony Guan wrote On 11/14/08 14:12,:

> I think I've found a little bug in the parallel GC code. Will somebody
> take a look at it?
> [...]
> But in request_old_gen_expansion, desired_free_space is immediately
> treated as expand_in_bytes.

Yes, you're right that this looks like a bug.

Also something that doesn't look right here is the use of
calculated_old_free_size_in_bytes(). It looks like it is adding room
for 1 additional young gen collection. Perhaps not a bad idea, but I
don't recall now why it is used instead of the straight promo_size().

> [...]
> Thus it will shrink the old gen space back down. So part of the space
> given up by the young generation ends up outside the old gen's usable
> space, and the size of this wasted memory is (desired_free_space - the
> previously free space of the old gen).

Yup, we've moved the boundary farther than we really wanted to.

Are you able to file a bug report for this? Did you want to fix it?
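For context on Jon's question, a hedged guess at what the helper
computes -- reconstructed from his description ("room for 1 additional
young gen collection"), not from the actual HotSpot source:

  // Hypothetical sketch of the size-policy helper under discussion.
  // Per Jon's description, it appears to pad the plain promo_size()
  // with roughly one young collection's worth of promoted data.
  size_t PSAdaptiveSizePolicy::calculated_old_free_size_in_bytes() const {
    return promo_size() + (size_t) avg_promoted()->padded_average();
  }

If that is the shape, the extra padding is the "policy decision" Jon
refers to later in the thread.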
From guanxiaohua at gmail.com  Fri Nov 14 16:00:46 2008
From: guanxiaohua at gmail.com (Tony Guan)
Date: Fri, 14 Nov 2008 18:00:46 -0600
Subject: Openjdk hotspot build 14.0-b06 hard coded bug found.
In-Reply-To: <491E0284.4000500@sun.com>
References: <2fcb552b0811141412s5cb90c90od356f80b381fcbb5@mail.gmail.com> <491E0284.4000500@sun.com>
Message-ID: <2fcb552b0811141600l19a65775ma9a083971cd57553@mail.gmail.com>

On Fri, Nov 14, 2008 at 4:58 PM, Jon Masamitsu wrote:
> Also something that doesn't look right here is the use of
> calculated_old_free_size_in_bytes(). It looks like it is adding room
> for 1 additional young gen collection. Perhaps not a bad idea, but I
> don't recall now why it is used instead of the straight promo_size().

In my opinion, adding room for 1 additional young gen collection is
OK, because in the adaptive parallel GC, by doing this we can reduce
the chance of doing a full GC next time. And that's why it's called
desired_free_space. (just a guess)

> Yup, we've moved the boundary farther than we really wanted to.
>
> Are you able to file a bug report for this? Did you want to fix it?

Is there any format I should follow to report the bug?

In fact it's simple to fix. Just change, in
void AdjoiningGenerations::adjust_boundary_for_old_gen_needs(),
the following:

  -request_old_gen_expansion(desired_free_space);
  +request_old_gen_expansion(desired_free_space - old_gen()->free_in_bytes());
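To put numbers on the over-expansion (values invented purely for
illustration): suppose desired_free_space is 512M and the old gen
currently has 384M free. The guard fires (384M < 512M), but the
unpatched call then moves the boundary by the full 512M, leaving
384M + 512M = 896M free; the patched call requests only the
512M - 384M = 128M shortfall. Since _old_gen->resize() subsequently
trims the old gen back to used + desired_free_space, under the
unpatched code the extra 384M -- exactly the previously free space --
is taken from the young generation yet ends up usable by neither
generation.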
From Jon.Masamitsu at Sun.COM  Fri Nov 14 16:53:46 2008
From: Jon.Masamitsu at Sun.COM (Jon Masamitsu)
Date: Fri, 14 Nov 2008 16:53:46 -0800
Subject: Openjdk hotspot build 14.0-b06 hard coded bug found.
In-Reply-To: <2fcb552b0811141600l19a65775ma9a083971cd57553@mail.gmail.com>
References: <2fcb552b0811141412s5cb90c90od356f80b381fcbb5@mail.gmail.com> <491E0284.4000500@sun.com> <2fcb552b0811141600l19a65775ma9a083971cd57553@mail.gmail.com>
Message-ID: <491E1D9A.2040208@sun.com>

Inline replies.

Tony Guan wrote On 11/14/08 16:00,:

> In my opinion, adding room for 1 additional young gen collection is
> OK, because in the adaptive parallel GC, by doing this we can reduce
> the chance of doing a full GC next time. And that's why it's called
> desired_free_space. (just a guess)

I think it's odd because "desired_free_size" should already have any
such additional space built into it. "desired_free_size" is set in a
method that purports to calculate the correct amount of free space for
the old generation, yet the use of calculated_old_free_size_in_bytes()
bumps that up just a bit. It probably doesn't hurt, but it's a policy
decision that should be embedded in the calculation of
"desired_free_size".

> Is there any format I should follow to report the bug?

In the description section, if you just included the info you put in
your mail, that would be fine. You can even cut-and-paste your mail
into it.

> In fact it's simple to fix. [...]

Yes, it's a straightforward change and so is a nice one if you want to
try your hand at a contribution.
From guanxiaohua at gmail.com  Fri Nov 14 22:49:36 2008
From: guanxiaohua at gmail.com (Tony Guan)
Date: Sat, 15 Nov 2008 00:49:36 -0600
Subject: Openjdk hotspot build 14.0-b06 hard coded bug found.
In-Reply-To: <491E1D9A.2040208@sun.com>
References: <2fcb552b0811141412s5cb90c90od356f80b381fcbb5@mail.gmail.com> <491E0284.4000500@sun.com> <2fcb552b0811141600l19a65775ma9a083971cd57553@mail.gmail.com> <491E1D9A.2040208@sun.com>
Message-ID: <2fcb552b0811142249m6da521cdsbc16a1a05fad0234@mail.gmail.com>

Hi Jon,

I've filed a bug report for it. The Review ID is 1390360. And I am not
sure about the calculated_old_free_size_in_bytes() part, so I didn't
include your comments. I can look further into the code in the coming
days.

Thanks!

Tony

On Fri, Nov 14, 2008 at 6:53 PM, Jon Masamitsu wrote:
> Inline replies.
> [...]
From Jon.Masamitsu at Sun.COM  Mon Nov 17 05:53:56 2008
From: Jon.Masamitsu at Sun.COM (Jon Masamitsu)
Date: Mon, 17 Nov 2008 05:53:56 -0800
Subject: Openjdk hotspot build 14.0-b06 hard coded bug found.
In-Reply-To: <2fcb552b0811142249m6da521cdsbc16a1a05fad0234@mail.gmail.com>
References: <2fcb552b0811141412s5cb90c90od356f80b381fcbb5@mail.gmail.com> <491E0284.4000500@sun.com> <2fcb552b0811141600l19a65775ma9a083971cd57553@mail.gmail.com> <491E1D9A.2040208@sun.com> <2fcb552b0811142249m6da521cdsbc16a1a05fad0234@mail.gmail.com>
Message-ID: <49217774.4070601@sun.com>

Tony,

Have you signed a Sun contributor agreement?

http://www.sun.com/software/opensource/contributor_agreement.jsp

Jon

Tony Guan wrote On 11/14/08 22:49,:

> Hi Jon,
>
> I've filed a bug report for it. The Review ID is 1390360.
> [...]