From Bond.Chen at lombardrisk.com  Fri Jun  1 02:48:05 2012
From: Bond.Chen at lombardrisk.com (Bond Chen)
Date: Fri, 01 Jun 2012 10:48:05 +0100
Subject: Free space calculation of heap
Message-ID: <4FC90055.9AAE.00F7.0@lombardrisk.com>

Hi,

Our application has been suffering from a bad fragmentation issue in the production environment. I have switched on a lot of diagnostic parameters to analyse the GC, but I don't fully understand all of the output.

By adding the parameter -XX:PrintFLSStatistics=2, the BinaryTreeDictionary and IndexedFreeLists statistics of the old gen and perm gen are printed out.

q1: the total free space + used space of the old gen, calculated from the GC output, is 128 KB less than the actual old gen capacity.
q2: all counters of the BinaryTreeDictionary and IndexedFreeLists of the perm gen are zero. Why?
q3: what is the meaning of "frag=0.0045"? Is it a fragmentation ratio, and can someone provide the formula?
q4: which parameter enables "output log2" below, and how do I interpret that output?

/**output log1 of gc log*/
67.016: [ParNew
Desired survivor size 111411200 bytes, new threshold 2 (max 32)
- age 1: 42577240 bytes, 42577240 total
- age 2: 74417368 bytes, 116994608 total
: 1512523K->153985K(1523200K), 0.2866570 secs] 1649586K->351496K(6000128K)After GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 546527717
Max Chunk Size: 546527717
Number of Blocks: 1
Av. Block Size: 546527717
Tree Height: 1
Statistics for IndexedFreeLists:
--------------------------------
Total Free Space: 1221215
Max Chunk Size: 256
Number of Blocks: 39302
Av. Block Size: 31
free=547748932 frag=0.0045
After GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 0
Max Chunk Size: 0
Number of Blocks: 0
Tree Height: 0
Statistics for IndexedFreeLists:
--------------------------------
Total Free Space: 0
Max Chunk Size: 0
Number of Blocks: 0
free=0 frag=0.0000
, 0.2868167 secs] [Times: user=0.91 sys=0.13, real=0.29 secs]
Heap after GC invocations=6 (full 1):
 par new generation total 1523200K, used 153985K [0xfffffd7e5ec10000, 0xfffffd7ec9010000, 0xfffffd7ec9010000)
  eden space 1305600K, 0% used [0xfffffd7e5ec10000, 0xfffffd7e5ec10000, 0xfffffd7eae710000)
  from space 217600K, 70% used [0xfffffd7eae710000, 0xfffffd7eb7d70438, 0xfffffd7ebbb90000)
  to space 217600K, 0% used [0xfffffd7ebbb90000, 0xfffffd7ebbb90000, 0xfffffd7ec9010000)
 concurrent mark-sweep generation total 4476928K, used 197511K [0xfffffd7ec9010000, 0xfffffd7fda410000, 0xfffffd7fda410000)
 concurrent-mark-sweep perm gen total 524288K, used 238680K [0xfffffd7fda410000, 0xfffffd7ffa410000, 0xfffffd7ffa410000)
}
/***/

/**output log2 of gc log**/
2012-05-31T15:23:18.293+0800: 9.731: [CMS-concurrent-sweep-start]
size[1] : demand: 0, old_rate: 0.000000, current_rate: 0.000000, new_rate: 0.000000, old_desired: 0, new_desired: 0
size[2] : demand: 0, old_rate: 0.000000, current_rate: 0.000000, new_rate: 0.000000, old_desired: 0, new_desired: 0
size[3] : demand: 0, old_rate: 0.000000, current_rate: 0.000000, new_rate: 0.000000, old_desired: 0, new_desired: 0
size[4] : demand: 0, old_rate: 0.000000, current_rate: 0.000000, new_rate: 0.000000, old_desired: 0, new_desired: 0
size[5] : demand: 0, old_rate: 0.000000, current_rate: 0.000000, new_rate: 0.000000, old_desired: 0, new_desired: 0
...............................
size[250] : demand: 0, old_rate: 0.000000, current_rate: 0.000000, new_rate: 0.000000, old_desired: 0, new_desired: 0
size[251] : demand: 0, old_rate: 0.000000, current_rate: 0.000000, new_rate: 0.000000, old_desired: 0, new_desired: 0
size[252] : demand: 0, old_rate: 0.000000, current_rate: 0.000000, new_rate: 0.000000, old_desired: 0, new_desired: 0
size[253] : demand: 0, old_rate: 0.000000, current_rate: 0.000000, new_rate: 0.000000, old_desired: 0, new_desired: 0
size[254] : demand: 0, old_rate: 0.000000, current_rate: 0.000000, new_rate: 0.000000, old_desired: 0, new_desired: 0
size[255] : demand: 0, old_rate: 0.000000, current_rate: 0.000000, new_rate: 0.000000, old_desired: 0, new_desired: 0
size[256] : demand: 0, old_rate: 0.000000, current_rate: 0.000000, new_rate: 0.000000, old_desired: 0, new_desired: 0
demand: 1, old_rate: 0.000000, current_rate: 0.103071, new_rate: 0.103071, old_desired: 0, new_desired: 0
***/

Many thanks,
Bond

From ysr1729 at gmail.com  Fri Jun  1 12:10:55 2012
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Fri, 1 Jun 2012 12:10:55 -0700
Subject: Free space calculation of heap
In-Reply-To: <4FC90055.9AAE.00F7.0@lombardrisk.com>
References: <4FC90055.9AAE.00F7.0@lombardrisk.com>
Message-ID:

Hi Bond -- some partial answers and comments inline below:

On Fri, Jun 1, 2012 at 2:48 AM, Bond Chen wrote:
> q1: the total free space + used space of the old gen, calculated from the GC output, is 128 KB less than the actual old gen capacity.

Can you elaborate by pointing out the specific line(s) at which you are seeing this issue? (For example, are you adding the free space reported in the BTD and the IFL and the used space of the old gen reported in the PrintHeapAtGC output, and comparing that to the old gen capacity reported there, and finding the difference?) I'll reproduce those numbers below:

Free in BTD:  546527717
Free in IFL:    1221215
-----------------------
Total Free:   547748932 heap words = (547748932 * 8 / 1024) KB = 4279289 KB

As reported in PrintHeapAtGC: total 4476928K, used 197511K
From the above line, computed free space = (4476928 - 197511) KB = 4279417 KB

"Lost" free space = (4279417 - 4279289) KB = 128 KB

OK, given that I arrived at the number you reported as missing, I guess I understood your question correctly.
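A small stand-alone sketch of the same arithmetic may help readers re-check it (the class name is made up for illustration; the constants are copied from the log above, and a heap word is assumed to be 8 bytes on this 64-bit JVM):

    public class FreeSpaceCheck {
        public static void main(String[] args) {
            // Free space reported by PrintFLSStatistics, in heap words.
            long btdWords = 546527717L;   // BinaryTreeDictionary
            long iflWords = 1221215L;     // IndexedFreeLists
            // Convert words to KB (8 bytes per word), rounding to the
            // nearest KB as in the figures quoted above.
            long flsFreeKB = Math.round((btdWords + iflWords) * 8 / 1024.0); // 4279289

            // Old gen capacity and usage reported by PrintHeapAtGC, in KB.
            long capacityKB = 4476928L;
            long usedKB = 197511L;
            long heapFreeKB = capacityKB - usedKB;                           // 4279417

            System.out.println("FLS free:    " + flsFreeKB + " KB");
            System.out.println("Heap free:   " + heapFreeKB + " KB");
            System.out.println("Unaccounted: " + (heapFreeKB - flsFreeKB) + " KB"); // 128
        }
    }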
This is interesting. It's been a great while since I looked at this code, but I know that CMS uses part of the old gen space to store certain book-keeping structures associated with promotion ("promo info" and certain header-word "spooling buffers") in the old gen heap. It allocates these buffers out of the old gen eagerly and doesn't return them unless we completely run out of space. (This policy should probably be revised so that it lazily returns free buffers.) However, it was my understanding that these buffers are reported as part of the used space. Maybe the book-keeping for space used doesn't take these into account, in which case I am surprised. It's probably worth poking around in the code to see what's going on. I'll try to do that if I get some time, unless someone from the JVM team gets to it sooner, especially those who may have poked around in that code recently.

> q2: all counters of the BinaryTreeDictionary and IndexedFreeLists of the perm gen are zero. Why?

The perm gen allocation patterns were sufficiently simple that a linearly allocated single slab of space was used instead of free lists, with the idea that the "holes" from freeing up class objects in the perm gen would not reclaim much usable space. This is probably worth revising, given all of the perm gen objects that will be constantly created and reclaimed with dynamic languages running atop the JVM. Jon may recall the details of why it was done that way. Much of this will change anyway under the new perm gen allocation regime, where the perm gen has been moved out of the "Java heap" into the "native heap", so it's probably not worth worrying about any more.

> q3: what is the meaning of "frag=0.0045"? Is it a fragmentation ratio, and can someone provide the formula?

It's a normalized number between 0 and 1, with 1 indicating "maximal fragmentation" and 0 indicating no fragmentation. It's computed as

  1 - (Sum(b_1^2)/(Sum(b_i))^2)

where the sum is over all free blocks i, with block i of size b_i.

> q4: which parameter enables "output log2" below, and how do I interpret that output?

The output lists, for each free-list size in the "small size" range (i.e. block sizes through 256 heap words), the demand rate for that block size: it lists the old and the current rate, and then, based on those, estimates how many blocks it needs to keep for that size. The sweep then makes coalescing decisions based on the estimated demand for the blocks in question. I believe it's produced by PrintFLSCensus or something like that?

-- ramki
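For concreteness, here is a minimal sketch of that formula (the class is made up for illustration, not JVM code; it squares each b_i, per the typo correction later in this thread). Fed roughly the block population from the log above, one 546527717-word block plus 39302 small blocks averaging 31 words, it comes out very close to the frag=0.0045 in the log:

    public class FragCheck {
        // frag = 1 - sum(b_i^2) / (sum(b_i))^2, over all free block sizes b_i.
        static double frag(long[] blocks) {
            double sum = 0, sumSq = 0;
            for (long b : blocks) {
                sum += b;
                sumSq += (double) b * b;
            }
            return sum == 0 ? 0.0 : 1.0 - sumSq / (sum * sum);
        }

        public static void main(String[] args) {
            // One huge block in the BinaryTreeDictionary plus 39302 blocks
            // of ~31 words in the IndexedFreeLists, as in the log above.
            long[] blocks = new long[1 + 39302];
            blocks[0] = 546527717L;
            for (int i = 1; i < blocks.length; i++) blocks[i] = 31L;
            // Prints frag = 0.0044; the log's 0.0045 reflects the real
            // (non-uniform) small-block size distribution.
            System.out.printf("frag = %.4f%n", frag(blocks));
        }
    }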
From jon.masamitsu at oracle.com  Fri Jun  1 14:02:22 2012
From: jon.masamitsu at oracle.com (Jon Masamitsu)
Date: Fri, 01 Jun 2012 14:02:22 -0700
Subject: Free space calculation of heap
In-Reply-To:
References: <4FC90055.9AAE.00F7.0@lombardrisk.com>
Message-ID: <4FC92DDE.5030107@oracle.com>

On 6/1/2012 12:10 PM, Srinivas Ramakrishna wrote:
>> q2: all counters of the BinaryTreeDictionary and IndexedFreeLists of the perm gen are zero. Why?
> The perm gen allocation patterns were sufficiently simple that a linearly allocated single slab of space was used instead of free lists, with the idea that the "holes" from freeing up class objects in the perm gen would not reclaim much usable space.

I recall that the perm gen used the linear allocation blocks without adaptive free lists. I would have thought that would still produce lists of free blocks; CMSPermGen does use the compactibleFreeListSpace allocate(). Part of the reason the linear allocation block is used is to keep the needed left-to-right allocation order of the klasses in the perm gen.

Jon

From ysr1729 at gmail.com  Fri Jun  1 16:33:10 2012
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Fri, 1 Jun 2012 16:33:10 -0700
Subject: Free space calculation of heap
In-Reply-To: <4FC92DDE.5030107@oracle.com>
References: <4FC90055.9AAE.00F7.0@lombardrisk.com> <4FC92DDE.5030107@oracle.com>
Message-ID:

On Fri, Jun 1, 2012 at 2:02 PM, Jon Masamitsu wrote:
> Part of the reason the linear allocation block is used is to keep the needed left-to-right allocation order of the klasses in the perm gen.

Ah, I now recall the order thing. Thanks for the reminder! You are right that if the perm gen were collected and swept by CMS, this would pick up the freed-up holes into the free lists. Thanks for correcting!
I suppose the empty free lists in Bond's snippet then indicate that no concurrent collection of the perm gen has happened in that run since the most recent stop-world compacting collection (which would empty the free lists and return it all to a single free block at the end).

-- ramki

From ysr1729 at gmail.com  Fri Jun  1 18:44:08 2012
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Fri, 1 Jun 2012 18:44:08 -0700
Subject: Free space calculation of heap
In-Reply-To:
References: <4FC90055.9AAE.00F7.0@lombardrisk.com>
Message-ID:

On Fri, Jun 1, 2012 at 12:10 PM, Srinivas Ramakrishna wrote:
> "Lost" free space = (4279417 - 4279289) KB = 128 KB
> Probably worth poking around in the code to see what's going on.

An offline exchange with Jon and a quick browse of the code revealed that the space was in a linear allocation block that sits in the old gen. I'm not quite sure if or when it's used for allocation; quite possibly it's a vestige of earlier experiments that never got deleted. So maybe the result of this discussion is a new bonus of 128 KB in the old gen, if we can somehow neutralize the lab refilling. Given that it's chump change for the usually large CMS sizes, it's probably not worth the effort.

-- ramki

From dhd at exnet.com  Sat Jun  2 00:10:56 2012
From: dhd at exnet.com (Damon Hart-Davis)
Date: Sat, 2 Jun 2012 08:10:56 +0100
Subject: Free space calculation of heap
In-Reply-To:
References: <4FC90055.9AAE.00F7.0@lombardrisk.com>
Message-ID:

Some of us running 'embedded' might still quite like it, eg as space for an extra thread's stack! B^>

Rgds

Damon
From ysr1729 at gmail.com  Sat Jun  2 20:38:41 2012
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Sat, 2 Jun 2012 20:38:41 -0700
Subject: Free space calculation of heap
In-Reply-To:
References: <4FC90055.9AAE.00F7.0@lombardrisk.com>
Message-ID:

Ah, I see... But is there much use of (i)CMS in the embedded world? I thought parallel GC's pauses would generally be tolerable for the usually small heaps used there. (But I guess the processors may be much slower too, so perhaps iCMS still has a bit of a market there?)

If it's worthwhile for the "embedded" area, perhaps have this fixed in "embedded" SE first and push the changes into SE hotspot from there:

http://hg.openjdk.java.net/hsx/hotspot-emb/hotspot

-- ramki

On Sat, Jun 2, 2012 at 12:10 AM, Damon Hart-Davis wrote:
> Some of us running 'embedded' might still quite like it, eg as space for an extra thread's stack! B^>
From dhd at exnet.com  Sun Jun  3 00:14:09 2012
From: dhd at exnet.com (Damon Hart-Davis)
Date: Sun, 3 Jun 2012 08:14:09 +0100
Subject: Free space calculation of heap
In-Reply-To:
References: <4FC90055.9AAE.00F7.0@lombardrisk.com>
Message-ID:

Please note that at least in my embedded application there is only one CPU, so parallel GC is not necessarily the right thing to do!

Rgds

Damon

On 3 Jun 2012, at 04:38, Srinivas Ramakrishna wrote:
> Ah, I see... But is there much use of (i)CMS in the embedded world? I thought parallel GC's pauses would generally be tolerable for the usually small heaps used there.

From Bond.Chen at lombardrisk.com  Sun Jun  3 23:16:30 2012
From: Bond.Chen at lombardrisk.com (Bond Chen)
Date: Mon, 04 Jun 2012 07:16:30 +0100
Subject: Free space calculation of heap
In-Reply-To:
References: <4FC90055.9AAE.00F7.0@lombardrisk.com>
Message-ID: <4FCCC33D.9AAE.00F7.0@lombardrisk.com>

Hi Srinivas,

Thanks for your response; see my comments inline.

Regards,
Bond

>>> Srinivas Ramakrishna 6/2/2012 3:10 AM >>>
> OK, given that I arrived at the number you reported as missing, I guess I understood your question correctly.

[BOND] YES, THAT'S HOW I CALC THE FREE SPACE.
> It's computed as 1 - (Sum(b_1^2)/(Sum(b_i))^2) where the sum is over all free blocks i, with block i of size b_i.

[BOND]: IN (Sum(b_1^2)/(Sum(b_i))^2), IS THE FIRST ONE b_i OR b_1 (ONE)?

> I believe it's produced by PrintFLSCensus or something like that?

[BOND]: MY OPTIONS DIDN'T INCLUDE PrintFLSCensus. IT SHOULD BE ONE OF MY NEWLY ADDED OPTIONS THIS TIME:
PrintFLSStatistics=2 (CHANGED =1 TO =2)
PrintPromotionFailure
PrintHeapAtGC
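(A launch line combining the free-list diagnostics discussed in this thread might look roughly like the sketch below. The flag spellings are assumed from CMS-era HotSpot, MainClass is a placeholder, and a given build should be checked with -XX:+PrintFlagsFinal.)

    java -XX:+UseConcMarkSweepGC \
         -XX:PrintFLSStatistics=2 -XX:PrintFLSCensus=1 \
         -XX:+PrintPromotionFailure -XX:+PrintHeapAtGC \
         -XX:+PrintGCDetails -XX:+PrintTenuringDistribution \
         -Xloggc:gc.log MainClass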
From ysr1729 at gmail.com  Mon Jun  4 00:05:50 2012
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Mon, 4 Jun 2012 00:05:50 -0700
Subject: Free space calculation of heap
In-Reply-To: <4FCCC33D.9AAE.00F7.0@lombardrisk.com>
References: <4FC90055.9AAE.00F7.0@lombardrisk.com> <4FCCC33D.9AAE.00F7.0@lombardrisk.com>
Message-ID:

On Sun, Jun 3, 2012 at 11:16 PM, Bond Chen wrote:
> [BOND] YES, THAT'S HOW I CALC THE FREE SPACE.

Good -- hopefully the subsequent emails cleared up the mystery.

> [BOND]: IN (Sum(b_1^2)/(Sum(b_i))^2), IS THE FIRST ONE b_i OR b_1 (ONE)?

Right, a typo. It should have been b_i, not b_1.

> [BOND]: MY OPTIONS DIDN'T INCLUDE PrintFLSCensus.

I can check the code and let you know, unless you find out for yourself or someone else answers first. It's been a while since I looked at these options.

-- ramki
From ysr1729 at gmail.com  Sat Jun  9 01:15:32 2012
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Sat, 9 Jun 2012 01:15:32 -0700
Subject: Free space calculation of heap
In-Reply-To: <4FCE2042.9AAE.00F7.0@lombardrisk.com>
References: <4FC90055.9AAE.00F7.0@lombardrisk.com> <4FCCC33D.9AAE.00F7.0@lombardrisk.com> <4FCD01A9.9AAE.00F7.0@lombardrisk.com> <4FCE2042.9AAE.00F7.0@lombardrisk.com>
Message-ID:

Hi Bond --

Sorry for the delay in responding; I hadn't had the time to visit the CMS code to check a few things. I just did, and I can confirm that what I described earlier is exactly how the computation of fragmentation is done. Recall that the only block sizes that enter into the calculation are those of free blocks. In that sense, the total size of the free space should have little direct bearing on the fragmentation metric, except through coalescing and allocation strategies that cause such a trend to be exhibited by the heap.

Now, let's think about the two extremal cases. The first is at the start of the JVM's life, or immediately following a full collection, when all of the free space is in a single large free block. In this case Sum(b^2) == (Sum(b))^2, so fragmentation = 0. Consider now the other extreme, where the entire free space has become maximally fragmented, so that all of the free blocks in the heap are of minimal object size, say 3 heap words (in the case of the 64-bit heap). In this case, if the heap is composed of k blocks of size b each, then the fragmentation metric is 1 - (k.b^2)/(k.b)^2 = 1 - 1/k, which tends to 1 as k tends to infinity, i.e. maximal fragmentation. Basically, the metric reflects how fine the granularity of the free blocks is vis-a-vis the available free space.

Now I can imagine that the allocation and sweep/coalescing heuristics are such that, as the free space dwindles, the fragmentation metric does drop. This can be seen as a good thing in a well-tuned system. The free space lives in the smaller free blocks in the so-called indexed (small-size block) lists, and in the so-called (large-size) binary tree dictionary. As we come close to the GC triggering threshold, we find that most of the smaller blocks have been used up. The sweep then starts, frees up the smaller blocks that are by now dead, and, based on historical demand, places them back into the free lists they came from. So the system shifts constantly between the state immediately following a sweep, when the smaller-sized block free lists are fully populated and some free space remains in the form of a few very large blocks in the binary tree dictionary, and a state where the smaller blocks are all used up. In the former state the fragmentation looks higher, and in the latter lower.

At least to me that seems like a reasonable explanation of the behaviour you are seeing. Does it make sense to you and others on the list as well, or is my explanation perhaps too simplistic?

-- ramki
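Both extremes are easy to verify numerically. A small sketch (an illustrative class, reusing the formula from earlier in the thread, with 3-word minimal blocks as described above):

    public class FragExtremes {
        // frag = 1 - sum(b_i^2) / (sum(b_i))^2 over free block sizes b_i.
        static double frag(long[] blocks) {
            double sum = 0, sumSq = 0;
            for (long b : blocks) {
                sum += b;
                sumSq += (double) b * b;
            }
            return sum == 0 ? 0.0 : 1.0 - sumSq / (sum * sum);
        }

        public static void main(String[] args) {
            // One single free block: frag == 0, regardless of its size.
            System.out.println(frag(new long[] { 546527717L }));   // 0.0

            // k minimal blocks of 3 heap words each: frag == 1 - 1/k,
            // which tends to 1 (maximal fragmentation) as k grows.
            for (int k : new int[] { 10, 1000, 100000 }) {
                long[] blocks = new long[k];
                java.util.Arrays.fill(blocks, 3L);
                System.out.printf("k=%d: frag=%.5f, 1-1/k=%.5f%n",
                                  k, frag(blocks), 1.0 - 1.0 / k);
            }
        }
    }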
On Tue, Jun 5, 2012 at 12:05 AM, Bond Chen wrote:
> Hi Srinivas,
> I have grepped all the old generation free heap word sizes and frag values (lines like "free=314669364 frag=0.6797") from the GC log clip pasted below and drawn a line chart. I found that the frag rate has exactly the same momentum as the free size, except in the very first time period.
> q1: Am I drawing the right chart? If so, can the frag counter truly reflect the rate of fragmentation?
>
> Regards,
> Bond
>
> /****gc log clip ****/
> : 1378638K->93316K(1523200K), 1.0308235 secs] 3397083K->2111761K(6000128K)After GC:
> Statistics for BinaryTreeDictionary:
> ------------------------------------
> Total Free Space: 277079438
> Max Chunk Size: 178032428
> Number of Blocks: 42888
> Av. Block Size: 6460
> Tree Height: 67
> Statistics for IndexedFreeLists:
> --------------------------------
> Total Free Space: 37589926
> Max Chunk Size: 256
> Number of Blocks: 6367913
> Av. Block Size: 5
> free=314669364 frag=0.6797
> After GC:
> Statistics for BinaryTreeDictionary:
> ------------------------------------
> Total Free Space: 0
> Max Chunk Size: 0
> Number of Blocks: 0
> Tree Height: 0
> Statistics for IndexedFreeLists:
> --------------------------------
> Total Free Space: 0
> Max Chunk Size: 0
> Number of Blocks: 0
> free=0 frag=0.0000
> , 1.0336722 secs] [Times: user=3.30 sys=0.02, real=1.03 secs]
> Heap after GC invocations=101430 (full 337):
>  par new generation total 1523200K, used 93316K [0xfffffd7e5ea10000, 0xfffffd7ec8e10000, 0xfffffd7ec8e10000)
>   eden space 1305600K, 0% used [0xfffffd7e5ea10000, 0xfffffd7e5ea10000, 0xfffffd7eae510000)
>   from space 217600K, 42% used [0xfffffd7eae510000, 0xfffffd7eb4031090, 0xfffffd7ebb990000)
>   to space 217600K, 0% used [0xfffffd7ebb990000, 0xfffffd7ebb990000, 0xfffffd7ec8e10000)
>  concurrent mark-sweep generation total 4476928K, used 2018445K [0xfffffd7ec8e10000, 0xfffffd7fda210000, 0xfffffd7fda210000)
>  concurrent-mark-sweep perm gen total 524288K, used 429440K [0xfffffd7fda210000, 0xfffffd7ffa210000, 0xfffffd7ffa210000)
> }
> Total time for which application threads were stopped: 1.0355841 seconds
> Total time for which application threads were stopped: 0.0042883 seconds
> Total time for which application threads were stopped: 0.0052694 seconds
> {Heap before GC invocations=101430 (full 337):
>  par new generation total 1523200K, used 1398916K [0xfffffd7e5ea10000, 0xfffffd7ec8e10000, 0xfffffd7ec8e10000)
>   eden space 1305600K, 100% used [0xfffffd7e5ea10000, 0xfffffd7eae510000, 0xfffffd7eae510000)
>   from space 217600K, 42% used [0xfffffd7eae510000, 0xfffffd7eb4031090, 0xfffffd7ebb990000)
>   to space 217600K, 0% used [0xfffffd7ebb990000, 0xfffffd7ebb990000, 0xfffffd7ec8e10000)
>  concurrent mark-sweep generation total 4476928K, used 2018445K [0xfffffd7ec8e10000, 0xfffffd7fda210000, 0xfffffd7fda210000)
>  concurrent-mark-sweep perm gen total 524288K, used 429440K [0xfffffd7fda210000, 0xfffffd7ffa210000, 0xfffffd7ffa210000)
> 2012-06-04T18:33:21.721+0800: 263877.362: [GC Before GC:
> Statistics for BinaryTreeDictionary:
> ------------------------------------
> Total Free Space: 277079438
> Max Chunk Size: 178032428
> Number of Blocks: 42888
> Av. Block Size: 6460
> Tree Height: 67
> Statistics for IndexedFreeLists:
> --------------------------------
> Total Free Space: 37589926
> Max Chunk Size: 256
> Number of Blocks: 6367913
> Av. Block Size: 5
> free=314669364 frag=0.6797
> Before GC:
> Statistics for BinaryTreeDictionary:
> ------------------------------------
> Total Free Space: 0
> Max Chunk Size: 0
> Number of Blocks: 0
> Tree Height: 0
> Statistics for IndexedFreeLists:
> --------------------------------
> Total Free Space: 0
> Max Chunk Size: 0
> Number of Blocks: 0
> free=0 frag=0.0000
> 263877.364: [ParNew-
> /***gc log clip ***/

From hotspotgc42 at googlemail.com  Mon Jun 11 07:10:08 2012
From: hotspotgc42 at googlemail.com (Office Consotec)
Date: Mon, 11 Jun 2012 16:10:08 +0200
Subject: Help, full gc takes forever
Message-ID:

Hello GC team, help!

In our production environment we are facing some very long GC times: 17 sec, 70-80 sec, etc.

** Environment
Java(TM) SE Runtime Environment (build 1.6.0_24-b07), OS: Windows XP x64, Memory: 4 GB, Application: JBoss Application Server (cluster), Restart: weekly. Note: no other application is installed, just the application server (JBoss, approx. ~1000 threads).

** GC Settings
-server -Xmx1580m -Xms1580m -Xmn800m
-Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
-Xss128k -XX:ThreadStackSize=128
-XX:+UseParallelGC -XX:+UseParallelOldGC -XX:MaxGCPauseMillis=50
-Xloggc:%ORCA_LOG_DIR%\gc.log
-XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=6
-XX:PermSize=96m -XX:MaxPermSize=96m
-XX:+ExplicitGCInvokesConcurrent -XX:ParallelGCThreads=6
-XX:+PrintCommandLineFlags -XX:MaxTenuringThreshold=31

For both examples below I noticed that the "real" time is much bigger than the CPU time actually used (user+sys):
=> [Times: user=2.72 sys=0.08, real=17.25 secs]
=> [Times: user=1.92 sys=0.80, real=80.42 secs]

Can this be wait time for some resource? Memory? Since the system has only 4 GB of memory, I consider paging a potential problem, but can paging explain 80 secs? Any help is appreciated!

Thanks!
*** 80 Seconds ***
2012-05-08T23:48:18.232-0600: 283660.832: [Full GC [PSYoungGen: 89592K->0K(627200K)] [ParOldGen: 683926K->238185K(696320K)] 773518K->238185K(1323520K) [PSPermGen: 75601K->75595K(98304K)], 0.4936415 secs] [Times: user=2.00 sys=0.00, real=0.50 secs]
--
-- A number of young GCs
--
2012-05-09T19:28:32.502-0600: 354473.434: [GC [PSYoungGen: 626836K->89594K(627200K)] 1315609K->781407K(1323520K), 0.0625468 secs] [Times: user=0.42 sys=0.00, real=0.06 secs]
2012-05-09T19:28:32.580-0600: 354473.513: [Full GC [PSYoungGen: 89594K->0K(627200K)] [ParOldGen: 691812K->191497K(696320K)] 781407K->191497K(1323520K) [PSPermGen: 75897K->75600K(98304K)], 80.4223984 secs] [Times: user=1.92 sys=0.80, real=80.42 secs]
Total time for which application threads were stopped: 80.5050138 seconds

*** 17 Seconds ***
226899.147: [pre compact{Heap before GC invocations=53721 (full 158):
 PSYoungGen total 716800K, used 713640K [0xb6bf0000, 0xe8bf0000, 0xe8bf0000)
  eden space 614400K, 100% used [0xb6bf0000,0xdc3f0000,0xdc3f0000)
  from space 102400K, 96% used [0xe27f0000,0xe88da1f0,0xe8bf0000)
  to space 102400K, 99% used [0xdc3f0000,0xe27edab0,0xe27f0000)
 ParOldGen total 798720K, used 798303K [0x85ff0000, 0xb6bf0000, 0xb6bf0000)
  object space 798720K, 99% used [0x85ff0000,0xb6b87fb0,0xb6bf0000)
 PSPermGen total 98304K, used 75275K [0x7fff0000, 0x85ff0000, 0x85ff0000)
  object space 98304K, 76% used [0x7fff0000,0x84972e80,0x85ff0000)
, 0.0003759 secs]
2012-01-17T09:52:59.467-0500: 226899.147: [Full GC226899.151: [marking phase226899.151: [par mark, 0.4889255 secs]
226899.640: [reference processing, 0.0052557 secs]
226899.645: [class unloading, 0.0692870 secs]
, 0.5635781 secs]
226899.714: [summary phase, 5.0144262 secs]
226904.729: [adjust roots, 0.0440449 secs]
226904.773: [compact perm gen, 0.0871871 secs]
226904.860: [compaction phase226904.860: [drain task setup, 0.0003858 secs]
226904.861: [dense prefix task setup, 0.0000069 secs]
226904.861: [steal task setup, 0.0000004 secs]
226904.861: [par compact, 0.2529356 secs]
226905.114: [deferred updates, 0.0085332 secs]
, 0.2621966 secs]
226905.122: [post compact, 11.2590418 secs]
[PSYoungGen: 713640K->0K(716800K)] [ParOldGen: 798303K->273778K(798720K)] 1511944K->273778K(1515520K) [PSPermGen: 75275K->75270K(98304K)], 17.2345163 secs] [Times: user=2.72 sys=0.08, real=17.25 secs]
Heap after GC invocations=53721 (full 158):
 PSYoungGen total 716800K, used 0K [0xb6bf0000, 0xe8bf0000, 0xe8bf0000)
  eden space 614400K, 0% used [0xb6bf0000,0xb6bf0000,0xdc3f0000)
  from space 102400K, 0% used [0xe27f0000,0xe27f0000,0xe8bf0000)
  to space 102400K, 0% used [0xdc3f0000,0xdc3f0000,0xe27f0000)
 ParOldGen total 798720K, used 273778K [0x85ff0000, 0xb6bf0000, 0xb6bf0000)
  object space 798720K, 34% used [0x85ff0000,0x96b4ca40,0xb6bf0000)
 PSPermGen total 98304K, used 75270K [0x7fff0000, 0x85ff0000, 0x85ff0000)
  object space 98304K, 76% used [0x7fff0000,0x849718a8,0x85ff0000)
}
Total time for which application threads were stopped: 17.9621428 seconds

From chkwok at digibites.nl  Mon Jun 11 08:25:51 2012
From: chkwok at digibites.nl (Chi Ho Kwok)
Date: Mon, 11 Jun 2012 17:25:51 +0200
Subject: Help, full gc takes forever
In-Reply-To:
References:
Message-ID:

Hi there,

It's the swap. When the real time is much higher than user+sys, the CPU is idle while waiting for data.
You must reserve some space for the OS: running a 4GB heap on 4GB of physical RAM does not work that well, as you can see. Try 3GB or 3.5GB; it's better to be safe than sorry with swap.

Regards,

Chi Ho Kwok
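This diagnosis can be checked mechanically. Below is a minimal made-up sketch, not an existing tool, that scans a GC log for stop-world pauses whose wall-clock time far exceeds the CPU time, which is the signature of swapping or some other stall outside the JVM:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class SwapSuspects {
        // Matches the trailer printed with -XX:+PrintGCDetails, e.g.
        // [Times: user=1.92 sys=0.80, real=80.42 secs]
        private static final Pattern TIMES = Pattern.compile(
            "\\[Times: user=([0-9.]+) sys=([0-9.]+), real=([0-9.]+) secs\\]");

        public static void main(String[] args) throws Exception {
            try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                String line;
                while ((line = in.readLine()) != null) {
                    Matcher m = TIMES.matcher(line);
                    if (!m.find()) continue;
                    double cpu  = Double.parseDouble(m.group(1))
                                + Double.parseDouble(m.group(2));
                    double real = Double.parseDouble(m.group(3));
                    // Flag pauses where wall-clock time dwarfs CPU time;
                    // the factor 2 and the 1s floor are arbitrary thresholds.
                    if (real > 1.0 && real > 2.0 * cpu) {
                        System.out.printf("suspect (real=%.2fs, cpu=%.2fs): %s%n",
                                          real, cpu, line);
                    }
                }
            }
        }
    }

Run against the log above, this would flag both the 17.25 s and the 80.42 s full GCs.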
From nicolas.richard at atos.net  Mon Jun 11 08:36:55 2012
From: nicolas.richard at atos.net (RICHARD Nicolas)
Date: Mon, 11 Jun 2012 15:36:55 +0000
Subject: Trace object type in GC
Message-ID: <4FD61097.70300@atos.net>

Dear hotspot developers,

I currently develop applications in Java.

I am looking for a way to trace the objects freed by the garbage collector. So far I have only found options that give statistics about the GC (execution time, average amount of freed memory, etc.), but none about the types of the objects handled by the GC.

Is there any option I could use, or is it possible to develop a JVM TI agent to get this information?

Best regards,
Nicolas RICHARD
From rednaxelafx at gmail.com Mon Jun 11 20:34:57 2012
From: rednaxelafx at gmail.com (Krystal Mok)
Date: Tue, 12 Jun 2012 11:34:57 +0800
Subject: Trace object type in GC
In-Reply-To: <4FD61097.70300@atos.net>
References: <4FD61097.70300@atos.net>
Message-ID:

Hi Nicolas,

If you're interested in the objects collected at a per-class granularity, then the following VM flags get close:

  manageable(bool, PrintClassHistogramBeforeFullGC, false,          \
          "Print a class histogram before any major stop-world GC") \
                                                                    \
  manageable(bool, PrintClassHistogramAfterFullGC, false,           \
          "Print a class histogram after any major stop-world GC")  \

What they do is basically the same as running jmap -histo before or after a full GC: they print a histogram of objects grouped by class. Comparing the numbers before and after a full GC will give you an idea of what was collected in that full GC.

You can turn these flags on by specifying them in the VM command-line arguments, e.g. -XX:+PrintClassHistogramBeforeFullGC
Or you can change their values while the VM is running, e.g. jinfo -flag +PrintClassHistogramBeforeFullGC

(There's no equivalent flag for minor GCs in HotSpot.)
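Because these flags are "manageable", they can also be flipped from inside the process through the HotSpot diagnostic MXBean. A minimal, untested sketch (the class name is hypothetical; the MXBean itself is the com.sun.management API shipped with JDK 6):

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Turns the histogram flag on from inside the running VM; the in-process
// equivalent of "jinfo -flag +PrintClassHistogramBeforeFullGC <pid>".
public class HistogramFlagToggle {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean hsdiag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        hsdiag.setVMOption("PrintClassHistogramBeforeFullGC", "true");
        // VMOption.toString() shows the flag's name, value, and origin
        System.out.println(hsdiag.getVMOption("PrintClassHistogramBeforeFullGC"));
    }
}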
If you're looking for information at a per-object granularity, that's something the HotSpot VM doesn't support; GC implementations seldom track such information.

Hope it helps,
- Kris

On Mon, Jun 11, 2012 at 11:36 PM, RICHARD Nicolas wrote:
> Is there an option I could use, or is it possible to develop a JVM TI
> agent to get this information?

From rednaxelafx at gmail.com Mon Jun 11 21:10:43 2012
From: rednaxelafx at gmail.com (Krystal Mok)
Date: Tue, 12 Jun 2012 12:10:43 +0800
Subject: Trace object type in GC
In-Reply-To: <0181C577-5097-4370-98E5-A7A0BCB58A4D@twitter.com>
References: <4FD61097.70300@atos.net> <0181C577-5097-4370-98E5-A7A0BCB58A4D@twitter.com>
Message-ID:

Hi Sam,

Wow, that's nice. Are there any plans to submit it to hotspot-gc-dev as a patch to OpenJDK?

- Kris

On Tue, Jun 12, 2012 at 12:06 PM, Sam Pullara wrote:
> What we would really like to have is a GC option that prints the top-N
> class histogram only for those objects that are being migrated from the
> oldest tenure age into the old generation. I have a prototype of it and
> it works, but it would be great if it were in the main code line.
>
> Sam
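Until an option like that is in the main line, part of the effect can be had by diffing the class histograms printed by the flags above, or two jmap -histo snapshots taken around a full GC. A rough, untested sketch of such a diff (hypothetical class name; assumes the usual jmap -histo line format "<num>: <instances> <bytes> <class name>"):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;

// Diffs two "jmap -histo" snapshots and prints per-class instance deltas.
public class HistoDiff {
    static Map<String, Long> load(String file) throws Exception {
        Map<String, Long> counts = new HashMap<String, Long>();
        BufferedReader in = new BufferedReader(new FileReader(file));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.trim().split("\\s+");
                // data rows look like: "  12:   1234   56789  com.foo.Bar"
                if (f.length >= 4 && f[0].endsWith(":")) {
                    counts.put(f[3], Long.parseLong(f[1]));
                }
            }
        } finally {
            in.close();
        }
        return counts;
    }

    public static void main(String[] args) throws Exception {
        // usage: java HistoDiff before.txt after.txt
        Map<String, Long> before = load(args[0]);
        Map<String, Long> after = load(args[1]);
        for (Map.Entry<String, Long> e : before.entrySet()) {
            Long a = after.get(e.getKey());
            long delta = (a == null ? 0L : a.longValue()) - e.getValue().longValue();
            if (delta != 0) {
                // negative delta = instances reclaimed by the full GC
                System.out.println(e.getKey() + " " + delta);
            }
        }
    }
}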
From kbbryant61 at gmail.com Wed Jun 27 16:05:22 2012
From: kbbryant61 at gmail.com (Kobe Bryant)
Date: Wed, 27 Jun 2012 16:05:22 -0700
Subject: Timestamps
Message-ID:

I do not follow the timestamps printed in the GC log of Oracle JDK 1.6. Looking at

[PSYoungGen: 699458K->22193K(743680K)] 886054K->233354K(3039488K), 0.0437840 secs]
[Times: user=0.29 sys=0.01, real=0.04 secs]

the user-space time is 0.29 seconds, while the wall-clock time is only 0.04 seconds. How can this be? Should not the wall-clock time ("real") include the time spent in user space ("user") and in the operating system ("sys")?

thanking you,

/kobe

From vitalyd at gmail.com Wed Jun 27 16:18:30 2012
From: vitalyd at gmail.com (Vitaly Davidovich)
Date: Wed, 27 Jun 2012 19:18:30 -0400
Subject: Timestamps
In-Reply-To:
References:
Message-ID:

You have multiple GC threads doing the young gen collection; the user time is their cumulative CPU time, but since they run in parallel the wall time is less. In your example, user=0.29 divided by real=0.04 is about 7, consistent with roughly seven GC threads working concurrently.

Sent from my phone

On Jun 27, 2012 7:06 PM, "Kobe Bryant" wrote:
> [Times: user=0.29 sys=0.01, real=0.04 secs]
>
> The user-space time is 0.29 seconds, while the wall-clock time is only
> 0.04 seconds. How can this be?
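The same cumulative-versus-elapsed distinction shows up in the standard management API: GarbageCollectorMXBean reports accumulated elapsed (wall-clock) collection time, not the summed CPU time of the GC worker threads. A small sketch that prints it (hypothetical class name; the MXBean methods are standard java.lang.management):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints each collector's accumulated collection count and elapsed time.
public class GcTimes {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // getCollectionTime() is accumulated wall-clock time in ms,
            // i.e. the "real" side, not the summed CPU time of GC workers.
            System.out.println(gc.getName()
                    + ": count=" + gc.getCollectionCount()
                    + ", time=" + gc.getCollectionTime() + " ms");
        }
    }
}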
From kbbryant61 at gmail.com Thu Jun 28 12:13:00 2012
From: kbbryant61 at gmail.com (Kobe Bryant)
Date: Thu, 28 Jun 2012 12:13:00 -0700
Subject: Trying to bump up tenuring threshold - failing
Message-ID:

I am using Oracle JDK 1.6 (latest build) in 64-bit mode. The system is encountering too many promotions out of the young gen, which drives the tenuring threshold down to 1.

So I thought I must increase the young gen size. I did this incrementally from a 2GB heap to a 3GB heap, and with each increment I increased the young gen size progressively:

export MEMORY_OPTIONS="-Xmx3g -Xms3g -XX:MaxNewSize=830m -XX:NewSize=830m -XX:SurvivorRatio=6 -XX:PermSize=764m -XX:MaxPermSize=764m"
export ADDITIONAL_GC_OPTIONS="-XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime"
export GC_OPTIONS="-verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC -Xloggc:log34221.log -XX:+DisableExplicitGC ${ADDITIONAL_GC_OPTIONS}"

But the tenuring threshold starts at 7 and then drops to 1:

$ grep "new threshold" log34421.log
Desired survivor size 108789760 bytes, new threshold 7 (max 15)
Desired survivor size 108789760 bytes, new threshold 7 (max 15)
Desired survivor size 108789760 bytes, new threshold 6 (max 15)
Desired survivor size 108789760 bytes, new threshold 5 (max 15)
Desired survivor size 108789760 bytes, new threshold 4 (max 15)
Desired survivor size 108789760 bytes, new threshold 3 (max 15)
Desired survivor size 108789760 bytes, new threshold 2 (max 15)
Desired survivor size 108789760 bytes, new threshold 1 (max 15)
Desired survivor size 108789760 bytes, new threshold 1 (max 15)
Desired survivor size 108789760 bytes, new threshold 1 (max 15)
...

How do I keep the tenuring threshold at 3 or 4?

Also, in one iteration, the application stop time due to GC is listed as

Application time: 78.8894050 seconds

Does this mean that GC threads suspended application threads for 78 seconds??

A section of the log is shown here:

{Heap before GC invocations=1878 (full 22):
 PSYoungGen      total 743680K, used 741582K [0x00000007cc200000, 0x0000000800000000, 0x0000000800000000)
  eden space 637440K, 100% used [0x00000007cc200000,0x00000007f3080000,0x00000007f3080000)
  from space 106240K, 98% used [0x00000007f3080000,0x00000007f96338c0,0x00000007f9840000)
  to   space 106240K, 0% used [0x00000007f9840000,0x00000007f9840000,0x0000000800000000)
 PSOldGen        total 2295808K, used 2071638K [0x0000000740000000, 0x00000007cc200000, 0x00000007cc200000)
  object space 2295808K, 90% used [0x0000000740000000,0x00000007be715978,0x00000007cc200000)
 PSPermGen       total 782336K, used 544637K [0x0000000710400000, 0x0000000740000000, 0x0000000740000000)
  object space 782336K, 69% used [0x0000000710400000,0x00000007317df490,0x0000000740000000)
2012-06-28T14:39:55.640-0400: 131078.967: [GCAdaptiveSizePolicy::compute_survivor_space_size_and_thresh: survived: 78129760 promoted: 13171824 overflow: falseAdaptiveSizeStart: 131079.039 collection: 1878
avg_survived_padded_avg: 158664864.000000 avg_promoted_padded_avg: 30081980.000000 avg_pretenured_padded_avg: 0.000000 tenuring_thresh: 1 target_size: 108789760
Desired survivor size 108789760 bytes, new threshold 1 (max 15)
PSAdaptiveSizePolicy::compute_generation_free_space: costs minor_time: 0.001379 major_cost: 0.000657 mutator_cost: 0.997964 throughput_goal: 0.990000 live_space: 2181788672 free_space: 1291321344 old_promo_size: 654835712 old_eden_size: 636485632 desired_promo_size: 654835712 desired_eden_size: 636485632
AdaptiveSizePolicy::survivor space sizes: collection: 1878 (108789760, 108789760) -> (108789760, 108789760)
AdaptiveSizeStop: collection: 1878
[PSYoungGen: 741582K->76298K(743680K)] 2813220K->2160800K(3039488K), 0.0723130 secs] [Times: user=0.27 sys=0.00, real=0.07 secs]
Heap after GC invocations=1878 (full 22):
 PSYoungGen      total 743680K, used 76298K [0x00000007cc200000, 0x0000000800000000, 0x0000000800000000)
  eden space 637440K, 0% used [0x00000007cc200000,0x00000007cc200000,0x00000007f3080000)
  from space 106240K, 71% used [0x00000007f9840000,0x00000007fe2c2a60,0x0000000800000000)
  to   space 106240K, 0% used [0x00000007f3080000,0x00000007f3080000,0x00000007f9840000)
 PSOldGen        total 2295808K, used 2084501K [0x0000000740000000, 0x00000007cc200000, 0x00000007cc200000)
  object space 2295808K, 90% used [0x0000000740000000,0x00000007bf3a55e8,0x00000007cc200000)
 PSPermGen       total 782336K, used 544637K [0x0000000710400000, 0x0000000740000000, 0x0000000740000000)
  object space 782336K, 69% used [0x0000000710400000,0x00000007317df490,0x0000000740000000)
}
Total time for which application threads were stopped: 0.0763960 seconds
Application time: 78.8894050 seconds

thanks,

/kobe

From chunt at salesforce.com Thu Jun 28 13:20:48 2012
From: chunt at salesforce.com (Charlie Hunt)
Date: Thu, 28 Jun 2012 13:20:48 -0700
Subject: Trying to bump up tenuring threshold - failing
In-Reply-To:
References:
Message-ID:

Add -XX:-UseAdaptiveSizePolicy, and then start tweaking SurvivorRatio and NewSize & MaxNewSize. The default behavior for Parallel GC is to use adaptive sizing, which will honor your NewSize & MaxNewSize but ignore your SurvivorRatio.
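With adaptive sizing off, the survivor sizes follow directly from NewSize and SurvivorRatio: the young gen is divided into SurvivorRatio + 2 parts, one part for each survivor space and the rest for eden. As a worked check against the settings in your mail (NewSize=830m, SurvivorRatio=6), at least for the initial layout:

  survivor space = 830m / (6 + 2) = 103.75m = 106240K
  eden space     = 830m * 6 / (6 + 2) = 622.5m = 637440K

which matches the "from space 106240K" and "eden space 637440K" lines in your log, so you can sanity-check any new NewSize/SurvivorRatio pair the same way.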
> Application time: 78.8894050 seconds

That is the amount of time your application has run since the last stop-the-world event, not a pause time.

You might find the step-by-step tuning info (and other stuff) in Java Performance [1] useful. ;-)

hths,

charlie ...

[1]: http://www.amazon.com/Java-Performance-Charlie-Hunt/dp/0137142528