From tony.printezis at sun.com  Thu Apr  3 14:22:00 2008
From: tony.printezis at sun.com (Tony Printezis)
Date: Thu, 03 Apr 2008 17:22:00 -0400
Subject: Proposed changes to the GC logging flags
Message-ID: <47F54A78.1020303@sun.com>

We're proposing a few small changes (and one significant one) to the way
the GC logging flags work:

a) When +PrintGCDetails is set, +PrintGCTimeStamps should be set by
default (currently, it has to be set explicitly; what we've found is
that it's always more useful to have the time stamps in a GC log when
analyzing it)

b) When -Xloggc: is set, +PrintGCDetails and +PrintGCTimeStamps are set
automatically (instead of -verbosegc and +PrintGCTimeStamps) (again, for
us, it's always more useful to have the more detailed +PrintGCDetails
output in the log when analyzing it)

c) When -verbosegc, +PrintGCDetails, or -Xloggc: are set,
+TraceClassUnloading is _not_ automatically set, as it is now (it
pollutes the output a bit during Full GCs)

Notice that, for the above, you'll still be able to get the old behavior
by setting / unsetting some flags; we're just trying to set more
sensible defaults.

The more significant proposal:

d) Eliminate the -verbosegc format in favor of the (more detailed and
more useful, though less concise) +PrintGCDetails format. Do people
still heavily use and rely on the -verbosegc format in a way that
migration to the +PrintGCDetails format will be difficult?

Your thoughts and feedback on the above will be appreciated. Thank you,

Tony, HS GC Group

-- 
----------------------------------------------------------------------
| Tony Printezis, Staff Engineer   | Sun Microsystems Inc.
|                                  |                                |
|                                  | MS BUR02-311                   |
| e-mail: tony.printezis at sun.com | 35 Network Drive               |
| office: +1 781 442 0998 (x20998) | Burlington, MA 01803-0902, USA |
----------------------------------------------------------------------
e-mail client: Thunderbird (Solaris)

From michael.finocchiaro at gmail.com  Fri Apr  4 00:46:17 2008
From: michael.finocchiaro at gmail.com (Michael Finocchiaro)
Date: Fri, 4 Apr 2008 09:46:17 +0200
Subject: Proposed changes to the GC logging flags
In-Reply-To: <47F54A78.1020303@sun.com>
References: <47F54A78.1020303@sun.com>
Message-ID: <8b61e5430804040046h7cbc85a1o738aafbff08696e3@mail.gmail.com>

That is a good idea to make PGCTS the default with PGCD, and I also agree
that the TraceClassUnloading output is annoying in a GC log.

I would propose that you look at HP's JVM GC log format as something far
easier to parse than the current format:

http://docs.hp.com/en/5992-1918/ch01s25.html

or perhaps the jstatd format.

After about 5 or 6 years I can almost eyeball the current file format, but
a numeric, space- or comma-separated list of the GC cause and the memory
space allocations would be a lot easier for post-mortem diagnosis and
tuning. And, at least in the HP case, the overhead is very minimal (you get
almost the same amount of detail as from -XX:+PrintTenuringDistribution in
a single line)... and since there still is not a robust and easy-to-use
equivalent of HPjmeter (disclosure: I worked for 10 years for HP) for
analyzing the JavaSoft HotSpot JVMs, this would be an improvement.
My 2 cents,
Fino

On Thu, Apr 3, 2008 at 11:22 PM, Tony Printezis wrote:
> We're proposing a few small changes (and on significant one) to the way
> the GC logging flags work:
> [...]

-- 
Michael Finocchiaro
michael.finocchiaro at gmail.com
Mobile Telephone: +33 6 67 90 64 39
MSN: le_fino at hotmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080404/46c8267e/attachment.html

From jamesnichols3 at gmail.com  Fri Apr  4 05:27:25 2008
From: jamesnichols3 at gmail.com (James Nichols)
Date: Fri, 4 Apr 2008 08:27:25 -0400
Subject: Proposed changes to the GC logging flags
In-Reply-To: <8b61e5430804040046h7cbc85a1o738aafbff08696e3@mail.gmail.com>
References: <47F54A78.1020303@sun.com> <8b61e5430804040046h7cbc85a1o738aafbff08696e3@mail.gmail.com>
Message-ID: <83a51e120804040527g1eb5aa94t49997fab39878701@mail.gmail.com>

I'd rather not have the format of the log changed, but simplifying the
arguments would be great. Isn't that the only change being proposed?

Doesn't everyone have a Perl script that they use to pull out the stuff
from the log that's relevant and put it in gnuplot or Excel or something?

Jim

On Fri, Apr 4, 2008 at 3:46 AM, Michael Finocchiaro
<michael.finocchiaro at gmail.com> wrote:
> [...]

_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080404/b8d22a09/attachment.html

From Keith.Holdaway at sas.com  Fri Apr  4 04:02:19 2008
From: Keith.Holdaway at sas.com (Keith Holdaway)
Date: Fri, 4 Apr 2008 07:02:19 -0400
Subject: RMI Activity Threads Lock GC o/p
Message-ID: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com>

We are running into issues where ostensibly the memory management appears
OK; less than 1% of the time is in GC when I put this file into HPJmeter 3.1:

0.000: [ParNew 47626K->6985K(1883840K), 0.5939434 secs]
0.613: [Full GC 6985K->6940K(1883840K), 0.7510576 secs]
288.169: [ParNew 399516K->20399K(1883840K), 3.0827681 secs]
844.451: [ParNew 412975K->26162K(1883840K), 0.3202843 secs]
1491.991: [ParNew 418738K->31914K(1883840K), 0.2347086 secs]
2177.292: [ParNew 424490K->37760K(1883840K), 0.3079626 secs]
2855.229: [ParNew 430336K->43595K(1883840K), 2.0764301 secs]
3575.979: [ParNew 436171K->49438K(1883840K), 0.2961466 secs]
3606.470: [ParNew 69808K->49730K(1883840K), 0.0388510 secs]
3606.511: [Full GC 49730K->42771K(1883840K), 2.5417084 secs]
4292.023: [ParNew 435347K->48662K(1883840K), 0.2445446 secs]
4970.650: [ParNew 441238K->54506K(1883840K), 0.2373110 secs]
5677.603: [ParNew 447082K->60349K(1883840K), 0.3322904 secs]
6367.994: [ParNew 452925K->66188K(1883840K), 0.2645763 secs]
7055.852: [ParNew 458764K->72033K(1883840K), 0.8281927 secs]
7210.009: [ParNew 167469K->73442K(1883840K), 0.0969525 secs]
7210.109: [Full GC 73442K->41123K(1883840K), 2.1642088 secs]
7909.604: [ParNew 433699K->47011K(1883840K), 0.2533163 secs]
8603.519: [ParNew 439587K->52863K(1883840K), 0.2230794 secs]
9289.216: [ParNew 445439K->58709K(1883840K), 0.2359698 secs]
9968.793: [ParNew 451285K->64554K(1883840K), 0.2656911 secs]
10649.694: [ParNew 457130K->70393K(1883840K), 0.2243246 secs]
10813.028: [ParNew 158599K->71696K(1883840K), 0.0770400 secs]
10813.107: [Full GC 71696K->41024K(1883840K), 1.7410828 secs]
11503.339: [ParNew 433600K->46907K(1883840K), 0.2542805 secs]
12191.022: [ParNew 439483K->52751K(1883840K), 0.2257059 secs]
12864.793: [ParNew 445327K->58591K(1883840K), 0.2231573 secs]
13546.217: [ParNew 451167K->64433K(1883840K), 0.2532376 secs]
14247.570: [ParNew 457009K->70278K(1883840K), 0.2111731 secs]
14415.581: [ParNew 168788K->71740K(1883840K), 0.0916532 secs]
14415.675: [Full GC 71740K->41182K(1883840K), 1.7439608 secs]
15096.989: [ParNew 433758K->47062K(1883840K), 0.2752132 secs]
15777.472: [ParNew 439638K->52905K(1883840K), 0.2132059 secs]
16475.184: [ParNew 445481K->58750K(1883840K), 0.2249407 secs]
16956.572: [ParNew 451326K->66543K(1883840K), 0.2237252 secs]
17593.401: [ParNew 459119K->72857K(1883840K), 0.2493865 secs]
18018.152: [ParNew 313587K->76412K(1883840K), 0.1719212 secs]
18018.326: [Full GC 76412K->44673K(1883840K), 1.9000112 secs]
18734.462: [ParNew 437249K->50542K(1883840K), 0.2459797 secs]
19434.180: [ParNew 443118K->56364K(1883840K), 0.2399764 secs]
20026.580: [ParNew 448940K->63103K(1883840K), 0.2327731 secs]
20723.692: [ParNew 455679K->68869K(1883840K), 0.2299928 secs]
21338.875: [ParNew 461445K->74742K(1883840K), 0.2005874 secs]
21620.952: [ParNew 269312K->78103K(1883840K), 0.1174351 secs]
21621.072: [Full GC 78103K->45998K(1883840K), 1.8386129 secs]
22227.195: [ParNew 438574K->51330K(1883840K), 0.2042002 secs]
22696.526: [ParNew 443906K->58015K(1883840K), 0.2154086 secs]
23246.252: [ParNew 450591K->63639K(1883840K), 0.2171688 secs]
23936.816: [ParNew 456215K->69353K(1883840K), 0.2421265 secs]
24529.163: [ParNew 461929K->75718K(1883840K), 0.1985638 secs]
25062.082: [ParNew 468294K->82472K(1883840K), 0.2119384 secs]
25223.640: [ParNew 205230K->84729K(1883840K), 0.0745738 secs]
25223.717: [Full GC 84729K->52981K(1883840K), 1.9445841 secs]
25808.453: [ParNew 445557K->58730K(1883840K), 0.2220857 secs]
27012.025: [ParNew 450888K->65873K(1883840K), 0.1835305 secs]
28826.400: [ParNew 194359K->68617K(1883840K), 0.0476450 secs]
28826.450: [Full GC 68617K->33933K(1883840K), 1.3288466 secs]
31626.367: [ParNew 426509K->39131K(1883840K), 0.1329507 secs]
32428.552: [ParNew 79650K->40294K(1883840K), 0.0451805 secs]
32428.600: [Full GC 40294K->29329K(1883840K), 1.0458070 secs]
36030.356: [ParNew 157110K->31764K(1883840K), 0.1066607 secs]
36030.465: [Full GC 31764K->28476K(1883840K), 0.9791810 secs]
39632.163: [ParNew 96572K->30448K(1883840K), 0.0852053 secs]
39632.251: [Full GC 30448K->27232K(1883840K), 0.9056725 secs]
43233.856: [ParNew 215673K->31439K(1883840K), 0.2064516 secs]
43234.074: [Full GC 31439K->28437K(1883840K), 1.1075595 secs]
46835.908: [ParNew 302993K->39167K(1883840K), 0.1579830 secs]
46836.074: [Full GC 39167K->35187K(1883840K), 1.1977157 secs]
50437.975: [ParNew 233401K->40095K(1883840K), 0.1419100 secs]
50438.130: [Full GC 40095K->36165K(1883840K), 1.3757682 secs]
54040.209: [ParNew 47288K->36927K(1883840K), 2.4154908 secs]
54042.656: [Full GC 36927K->35142K(1883840K), 1.7857094 secs]
57645.546: [ParNew 48404K->36028K(1883840K), 0.9233543 secs]
57646.503: [Full GC 36028K->33941K(1883840K), 1.2575880 secs]
61248.475: [ParNew 62613K->36158K(1883840K), 1.5358356 secs]
61250.042: [Full GC 36158K->34806K(1883840K), 1.1270633 secs]
64852.138: [ParNew 89705K->37904K(1883840K), 2.8467706 secs]
64855.019: [Full GC 37904K->36625K(1883840K), 1.2928314 secs]

Here are our VM args:

-server -Xms1840m -Xmx1840m -Xss256k -XX:+UseConcMarkSweepGC
-XX:NewSize=384m -XX:MaxNewSize=384m -XX:PermSize=256m -XX:MaxPermSize=256m
-Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
-Djava.headless.awt=true -Xloggc:gc.log

We see the DGC working every hour - 3600 seconds apart - a ParNew followed
by a Full GC - and there is a plethora of class unloading of the Sun
reflection classes since we do a lot of RMI - serialisation/deserialisation.

Should we increase the frequency of DGC?
Not sure why the VM hangs - possibly our client code - but we wanted to
exclude completely the idea that GC is culpable of creating this or
contributing to this failure.

thanks

keith

From Jon.Masamitsu at Sun.COM  Fri Apr  4 07:51:24 2008
From: Jon.Masamitsu at Sun.COM (Jon Masamitsu)
Date: Fri, 04 Apr 2008 07:51:24 -0700
Subject: Proposed changes to the GC logging flags
In-Reply-To: <83a51e120804040527g1eb5aa94t49997fab39878701@mail.gmail.com>
References: <47F54A78.1020303@sun.com> <8b61e5430804040046h7cbc85a1o738aafbff08696e3@mail.gmail.com> <83a51e120804040527g1eb5aa94t49997fab39878701@mail.gmail.com>
Message-ID: <47F6406C.3020203@sun.com>

James Nichols wrote On 04/04/08 05:27,:

> I'd rather not have the format of the log changed, but simplifying the
> arguments would be great. Isn't that the only change being proposed?

At the end of the mail there was a suggestion to use the format for
PrintGCDetails whenever -verbosegc was turned on. Currently the format
for -verbosegc output is a shorter form (same as the PrintGC output).
By my count we've only seen responses asking that it not change.

> Doesn't everyone have a Perl script that they use to pull out the
> stuff from the log that's relevant and put it in gnuplot or Excel or
> something?

We are trying to open source a tool that we call GChisto that takes the
PrintGCDetails (also PrintGC) output and plots some of it. Lots of
paperwork. We're not there yet.

> Jim
> [...]

From Paul.Hohensee at Sun.COM  Fri Apr  4 08:16:52 2008
From: Paul.Hohensee at Sun.COM (Paul Hohensee)
Date: Fri, 04 Apr 2008 11:16:52 -0400
Subject: Proposed changes to the GC logging flags
In-Reply-To: <47F6406C.3020203@sun.com>
References: <47F54A78.1020303@sun.com> <8b61e5430804040046h7cbc85a1o738aafbff08696e3@mail.gmail.com> <83a51e120804040527g1eb5aa94t49997fab39878701@mail.gmail.com> <47F6406C.3020203@sun.com>
Message-ID: <47F64664.4070206@sun.com>

An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080404/81e1de36/attachment.html

From Jon.Masamitsu at Sun.COM  Fri Apr  4 09:19:43 2008
From: Jon.Masamitsu at Sun.COM (Jon Masamitsu)
Date: Fri, 04 Apr 2008 09:19:43 -0700
Subject: Proposed changes to the GC logging flags
In-Reply-To: <47F64664.4070206@sun.com>
References: <47F54A78.1020303@sun.com> <8b61e5430804040046h7cbc85a1o738aafbff08696e3@mail.gmail.com> <83a51e120804040527g1eb5aa94t49997fab39878701@mail.gmail.com> <47F6406C.3020203@sun.com> <47F64664.4070206@sun.com>
Message-ID: <47F6551F.1070800@sun.com>

Paul Hohensee wrote On 04/04/08 08:16,:

> How about we change the -verbose:gc dump format but have some kind of
> compatibility mode, e.g., -verbose:gc:old?

Are you suggesting a change in default behavior?

From Jon.Masamitsu at Sun.COM  Fri Apr  4 09:39:22 2008
From: Jon.Masamitsu at Sun.COM (Jon Masamitsu)
Date: Fri, 04 Apr 2008 09:39:22 -0700
Subject: RMI Activity Threads Lock GC o/p
In-Reply-To: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com>
References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com>
Message-ID: <47F659BA.5050309@sun.com>

Keith,

I'm not understanding what the problem is. Is there some JVM in your
distributed systems that is running out of memory? Or some JVM (other
than the one that produced this log) doing lots of GC?
Jon

Keith Holdaway wrote On 04/04/08 04:02,:

>We are running into issues where ostensibly the memory management appears OK; less than 1% of the tome is in GC - when I put this file into HPJmeter 3.1;
>
>0.000: [ParNew 47626K->6985K(1883840K), 0.5939434 secs]
>0.613: [Full GC 6985K->6940K(1883840K), 0.7510576 secs]
>[...]
>64852.138: [ParNew 89705K->37904K(1883840K), 2.8467706 secs]
>64855.019: [Full GC 37904K->36625K(1883840K), 1.2928314 secs]
>
>Here are our VM args:
>
>-server -Xms1840m -Xmx1840m -Xss256k -XX:+UseConcMarkSweepGC -XX:NewSize=384m -XX:MaxNewSize=384m -XX:PermSize=256m -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
>-Djava.headless.awt=true -Xloggc:gc.log
>
>We see the DGC working every hour - 3600 seconds apart a ParNew followed by a Full GC - and there is a plethora of class unloading of the Sun reflection classes since we do a lot of RMI - serialisation/deserialisation.
>
>Should we increase the frequency of DGC? Not sure why the VM hangs - possibly our client code - but we wanted to exclude completely the idea that GC is culpable of creating this or contributing to this failure.
> >thanks > >keith > > >_______________________________________________ >hotspot-gc-use mailing list >hotspot-gc-use at openjdk.java.net >http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > From Y.S.Ramakrishna at Sun.COM Fri Apr 4 09:42:10 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Fri, 04 Apr 2008 09:42:10 -0700 Subject: RMI Activity Threads Lock GC o/p In-Reply-To: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> Message-ID: Hi Keith -- See inline below:- > We are running into issues where ostensibly the memory management > appears OK; less than 1% of the tome is in GC - when I put this file > into HPJmeter 3.1; > > 0.000: [ParNew 47626K->6985K(1883840K), 0.5939434 secs] > 0.613: [Full GC 6985K->6940K(1883840K), 0.7510576 secs] > 288.169: [ParNew 399516K->20399K(1883840K), 3.0827681 secs] > 844.451: [ParNew 412975K->26162K(1883840K), 0.3202843 secs] > 1491.991: [ParNew 418738K->31914K(1883840K), 0.2347086 secs] > 2177.292: [ParNew 424490K->37760K(1883840K), 0.3079626 secs] > 2855.229: [ParNew 430336K->43595K(1883840K), 2.0764301 secs] > 3575.979: [ParNew 436171K->49438K(1883840K), 0.2961466 secs] > 3606.470: [ParNew 69808K->49730K(1883840K), 0.0388510 secs] > 3606.511: [Full GC 49730K->42771K(1883840K), 2.5417084 secs] > 4292.023: [ParNew 435347K->48662K(1883840K), 0.2445446 secs] > 4970.650: [ParNew 441238K->54506K(1883840K), 0.2373110 secs] > 5677.603: [ParNew 447082K->60349K(1883840K), 0.3322904 secs] > 6367.994: [ParNew 452925K->66188K(1883840K), 0.2645763 secs] > 7055.852: [ParNew 458764K->72033K(1883840K), 0.8281927 secs] > 7210.009: [ParNew 167469K->73442K(1883840K), 0.0969525 secs] > 7210.109: [Full GC 73442K->41123K(1883840K), 2.1642088 secs] > 7909.604: [ParNew 433699K->47011K(1883840K), 0.2533163 secs] > 8603.519: [ParNew 439587K->52863K(1883840K), 0.2230794 secs] > 9289.216: [ParNew 445439K->58709K(1883840K), 0.2359698 
secs] > 9968.793: [ParNew 451285K->64554K(1883840K), 0.2656911 secs] > 10649.694: [ParNew 457130K->70393K(1883840K), 0.2243246 secs] > 10813.028: [ParNew 158599K->71696K(1883840K), 0.0770400 secs] > 10813.107: [Full GC 71696K->41024K(1883840K), 1.7410828 secs] > 11503.339: [ParNew 433600K->46907K(1883840K), 0.2542805 secs] > 12191.022: [ParNew 439483K->52751K(1883840K), 0.2257059 secs] > 12864.793: [ParNew 445327K->58591K(1883840K), 0.2231573 secs] > 13546.217: [ParNew 451167K->64433K(1883840K), 0.2532376 secs] > 14247.570: [ParNew 457009K->70278K(1883840K), 0.2111731 secs] > 14415.581: [ParNew 168788K->71740K(1883840K), 0.0916532 secs] > 14415.675: [Full GC 71740K->41182K(1883840K), 1.7439608 secs] > 15096.989: [ParNew 433758K->47062K(1883840K), 0.2752132 secs] > 15777.472: [ParNew 439638K->52905K(1883840K), 0.2132059 secs] > 16475.184: [ParNew 445481K->58750K(1883840K), 0.2249407 secs] > 16956.572: [ParNew 451326K->66543K(1883840K), 0.2237252 secs] > 17593.401: [ParNew 459119K->72857K(1883840K), 0.2493865 secs] > 18018.152: [ParNew 313587K->76412K(1883840K), 0.1719212 secs] > 18018.326: [Full GC 76412K->44673K(1883840K), 1.9000112 secs] > 18734.462: [ParNew 437249K->50542K(1883840K), 0.2459797 secs] > 19434.180: [ParNew 443118K->56364K(1883840K), 0.2399764 secs] > 20026.580: [ParNew 448940K->63103K(1883840K), 0.2327731 secs] > 20723.692: [ParNew 455679K->68869K(1883840K), 0.2299928 secs] > 21338.875: [ParNew 461445K->74742K(1883840K), 0.2005874 secs] > 21620.952: [ParNew 269312K->78103K(1883840K), 0.1174351 secs] > 21621.072: [Full GC 78103K->45998K(1883840K), 1.8386129 secs] > 22227.195: [ParNew 438574K->51330K(1883840K), 0.2042002 secs] > 22696.526: [ParNew 443906K->58015K(1883840K), 0.2154086 secs] > 23246.252: [ParNew 450591K->63639K(1883840K), 0.2171688 secs] > 23936.816: [ParNew 456215K->69353K(1883840K), 0.2421265 secs] > 24529.163: [ParNew 461929K->75718K(1883840K), 0.1985638 secs] > 25062.082: [ParNew 468294K->82472K(1883840K), 0.2119384 secs] > 
25223.640: [ParNew 205230K->84729K(1883840K), 0.0745738 secs] > 25223.717: [Full GC 84729K->52981K(1883840K), 1.9445841 secs] > 25808.453: [ParNew 445557K->58730K(1883840K), 0.2220857 secs] > 27012.025: [ParNew 450888K->65873K(1883840K), 0.1835305 secs] > 28826.400: [ParNew 194359K->68617K(1883840K), 0.0476450 secs] > 28826.450: [Full GC 68617K->33933K(1883840K), 1.3288466 secs] > 31626.367: [ParNew 426509K->39131K(1883840K), 0.1329507 secs] > 32428.552: [ParNew 79650K->40294K(1883840K), 0.0451805 secs] > 32428.600: [Full GC 40294K->29329K(1883840K), 1.0458070 secs] > 36030.356: [ParNew 157110K->31764K(1883840K), 0.1066607 secs] > 36030.465: [Full GC 31764K->28476K(1883840K), 0.9791810 secs] > 39632.163: [ParNew 96572K->30448K(1883840K), 0.0852053 secs] > 39632.251: [Full GC 30448K->27232K(1883840K), 0.9056725 secs] > 43233.856: [ParNew 215673K->31439K(1883840K), 0.2064516 secs] > 43234.074: [Full GC 31439K->28437K(1883840K), 1.1075595 secs] > 46835.908: [ParNew 302993K->39167K(1883840K), 0.1579830 secs] > 46836.074: [Full GC 39167K->35187K(1883840K), 1.1977157 secs] > 50437.975: [ParNew 233401K->40095K(1883840K), 0.1419100 secs] > 50438.130: [Full GC 40095K->36165K(1883840K), 1.3757682 secs] > 54040.209: [ParNew 47288K->36927K(1883840K), 2.4154908 secs] > 54042.656: [Full GC 36927K->35142K(1883840K), 1.7857094 secs] > 57645.546: [ParNew 48404K->36028K(1883840K), 0.9233543 secs] > 57646.503: [Full GC 36028K->33941K(1883840K), 1.2575880 secs] > 61248.475: [ParNew 62613K->36158K(1883840K), 1.5358356 secs] > 61250.042: [Full GC 36158K->34806K(1883840K), 1.1270633 secs] > 64852.138: [ParNew 89705K->37904K(1883840K), 2.8467706 secs] > 64855.019: [Full GC 37904K->36625K(1883840K), 1.2928314 secs] > Did you notice that towards the end of the log above, your allocation rates have plummeted and the scavenges themselves are taking pretty long? Perhaps that gives you some ideas as to what could be happening? 
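That drop can be quantified directly from pairs of adjacent log entries: the occupancy at the start of a collection, minus the occupancy at the end of the previous one, divided by the elapsed time, approximates the allocation rate. The following is a hypothetical sketch (Python, not part of the thread); it assumes only the simple `timestamp: [Kind before->after(total), pause secs]` line shape shown above, and the sample lines are taken verbatim from the log:

```python
import re

# Matches entries like "288.169: [ParNew 399516K->20399K(1883840K), 3.0827681 secs]"
ENTRY = re.compile(
    r"(?P<ts>[\d.]+): \[(?:ParNew|Full GC) "
    r"(?P<before>\d+)K->(?P<after>\d+)K\(\d+K\), [\d.]+ secs\]"
)

def alloc_rate_kb_s(prev_line, next_line):
    """Approximate KB allocated per second between two adjacent collections."""
    prev, cur = ENTRY.search(prev_line), ENTRY.search(next_line)
    allocated_kb = int(cur["before"]) - int(prev["after"])
    elapsed_s = float(cur["ts"]) - float(prev["ts"])
    return allocated_kb / elapsed_s

# Early in the log vs. late in the log:
early = alloc_rate_kb_s(
    "0.613: [Full GC 6985K->6940K(1883840K), 0.7510576 secs]",
    "288.169: [ParNew 399516K->20399K(1883840K), 3.0827681 secs]",
)
late = alloc_rate_kb_s(
    "57646.503: [Full GC 36028K->33941K(1883840K), 1.2575880 secs]",
    "61248.475: [ParNew 62613K->36158K(1883840K), 1.5358356 secs]",
)
print(f"early: ~{early:.0f} KB/s, late: ~{late:.0f} KB/s")  # roughly 1365 vs. 8
```

The two-orders-of-magnitude fall in allocation rate between those pairs is what makes the tail of the log stand out.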
> Here are our VM args: > > -server -Xms1840m -Xmx1840m -Xss256k -XX:+UseConcMarkSweepGC > -XX:NewSize=384m -XX:MaxNewSize=384m -XX:PermSize=256m > -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 > -Djava.headless.awt=true -Xloggc:gc.log I'd suggest experimenting with either:- -XX:+ExplicitGCInvokesConcurrent[AndUnloadsClasses] -XX:+CMSClassUnloadingEnabled or, perhaps less desirable, but certainly useful from the perspective of your debugging objectives here:- -XX:+DisableExplicitGC -XX:+CMSClassUnloadingEnabled > > We see the DGC working every hour - 3600 seconds apart a ParNew > followed by a Full GC - and there is a plethora of class unloading of > the Sun reflection classes since we do a lot of RMI - serialisation/deserialisation. > > Should we increase the frequency of DGC? Not sure why the VM hangs - > possibly our client code - but we wanted to exclude completely the > idea that GC is culpable of creating this or contributing to this failure. Check that you are not paging and running slow rather than hanging? When you get the "hung jvm", if on Solaris, try prstat -L -p <pid> to see if any threads are active, and also try pstack <pid> (perhaps several seconds apart, to observe any active threads). If the application shows no activity (from above), try jstack <pid> (or kill -QUIT <pid>) to see if you can elicit a java thread stack dump. (I was not sure from your description whether you believed the JVM was hung or that the jvm was responding -- for example doing the occasional gc etc -- but the application response had plummeted.) 
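The sampling recipe above (several stack snapshots a few seconds apart, then compare them for active threads) is easy to script. A hypothetical sketch follows (Python, not from the thread); it assumes `pstack` and `jstack` are on the PATH and the pid is known, and the function names are illustrative:

```python
import subprocess
import time

def sample_cmds(pid, tools=("pstack", "jstack")):
    """Build the per-snapshot commands suggested above, as argument vectors."""
    return [[tool, str(pid)] for tool in tools]

def sample_stacks(pid, rounds=3, interval_s=5):
    """Take a few stack snapshots, spaced out, to see which threads stay active."""
    for _ in range(rounds):
        for cmd in sample_cmds(pid):
            result = subprocess.run(cmd, capture_output=True, text=True)
            print(f"--- {' '.join(cmd)} ---\n{result.stdout}")
        time.sleep(interval_s)
```

Comparing successive snapshots quickly distinguishes a truly hung JVM (no thread makes progress) from one that is merely crawling, e.g. because it is paging.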
-- ramki > > thanks > > keith > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From jamesnichols3 at gmail.com Fri Apr 4 09:46:56 2008 From: jamesnichols3 at gmail.com (jamesnichols3 at gmail.com) Date: Fri, 4 Apr 2008 16:46:56 +0000 Subject: Proposed changes to the GC logging flags In-Reply-To: <47F65A95.5030505@sun.com> References: <47F54A78.1020303@sun.com><8b61e5430804040046h7cbc85a1o738aafbff08696e3@mail.gmail.com><83a51e120804040527g1eb5aa94t49997fab39878701@mail.gmail.com><47F65A95.5030505@sun.com> Message-ID: <1584721556-1207327612-cardhu_decombobulator_blackberry.rim.net-471298625-@bxe004.bisx.prod.on.blackberry> That would be sweet! Jim Sent from my Verizon Wireless BlackBerry -----Original Message----- From: Tony Printezis Date: Fri, 04 Apr 2008 12:43:01 To:James Nichols Cc:Michael Finocchiaro ,hotspot-gc-use at openjdk.java.net Subject: Re: Proposed changes to the GC logging flags James Nichols wrote: > I'd rather not have the format of the log changed, but simplifying the > arguments would be great. Isn't that the only change being proposed? We're asking feedback on both. > Doesn't everyone have a Perl script that they use to pull out the > stuff from the log that's relevant and put it in gnuplot or Excel or > something? If we can provide you with a Java library that parses whatever format we come up with and allows you to access the data from Java, in a sense shielding you from GC log format changes, would that be a reasonable compromise? Tony > On Fri, Apr 4, 2008 at 3:46 AM, Michael Finocchiaro > > > wrote: > > That is good idea to make PGCTS the default with PGCD and I also > agree that the TraceClassUnloading is annoying in a gc log. > > I would propose that you look at HP's JVM GC log format as > something far easier to parse than the current format. 
> > http://docs.hp.com/en/5992-1918/ch01s25.html > > or perhaps the jstatd format. > > After about 5 or 6 years I can almost eyeball the current file > format but a numeric based, space or comma-separated list of the > GC cause and the memory space allocations would be a lot easier > for post-mortem diagnosis and tuning. And, at least in the HP > case, the overhead is very minimal (you get almost the same amount > of detail as from -XX:+PrintTenuringDistribution in a single > line)...and since there still is not a robust and easy-to-use > equivalent of HPjmeter (disclosure - I worked for 10 years for HP) > for analyzing the JavaSoft HotSpot JVMs, this would be an improvement. > > My 2 cents, > Fino > > > On Thu, Apr 3, 2008 at 11:22 PM, Tony Printezis > > wrote: > > We're proposing a few small changes (and on significant one) > to the way > the GC logging flags work: > > a) When +PrintGCDetails is set, +PrintGCTimeStamps should be > set by > default (currently, it has to be set explicitly; what we've > found is > that it's always more useful to have the time stamps in a GC > log when > analyzing it) > b) When -XXloggc: is set, +PrintGCDetails and > +PrintGCTimeStamps are set > automatically (instead of -verbosegc and +PrintGCTimeStamps) > (again, for > us, it's always more useful to have the more detailed > +PrintGCDetails > output in the log when analyzing it) > c) When -verbosegc, +PrintGCDetails, or -Xloggc: are set, > +TraceClassUnloading is _not_ automatically set, as it is now (it > polutes the output a bit during Full GCs) > > Notice that, for the above, you'll still be able to get the > old behavior > by setting / unsetting some flags; we're just trying to set more > sensible defaults. > > The more significant proposal: > > d) Eliminate the -verbosegc format in favor of the (more > detailed and > more useful, though less concise) +PrintGCDetails format. 
Do > people > still heavily use and rely on the -verbosegc format in a way that > migration to the +PrintGCDetails format will be difficult? > > Your thoughts and feedback on the above will be appreciated. > Thank you, > > Tony, HS GC Group > > -- > ---------------------------------------------------------------------- > | Tony Printezis, Staff Engineer | Sun Microsystems Inc. > | > | | MS BUR02-311 > | > | e-mail: tony.printezis at sun.com > | 35 Network Drive > | > | office: +1 781 442 0998 (x20998) | Burlington, > MA01803-0902, USA | > ---------------------------------------------------------------------- > e-mail client: Thunderbird (Solaris) > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > > -- > Michael Finocchiaro > michael.finocchiaro at gmail.com > Mobile Telephone: +33 6 67 90 64 39 > MSN: le_fino at hotmail.com > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > -- ---------------------------------------------------------------------- | Tony Printezis, Staff Engineer | Sun Microsystems Inc. 
| | | MS BUR02-311 | | e-mail: tony.printezis at sun.com | 35 Network Drive | | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | ---------------------------------------------------------------------- e-mail client: Thunderbird (Solaris) From tony.printezis at sun.com Fri Apr 4 09:40:08 2008 From: tony.printezis at sun.com (Tony Printezis) Date: Fri, 04 Apr 2008 12:40:08 -0400 Subject: Proposed changes to the GC logging flags In-Reply-To: <8b61e5430804040046h7cbc85a1o738aafbff08696e3@mail.gmail.com> References: <47F54A78.1020303@sun.com> <8b61e5430804040046h7cbc85a1o738aafbff08696e3@mail.gmail.com> Message-ID: <47F659E8.9000405@sun.com> Hi, Michael Finocchiaro wrote: > That is good idea to make PGCTS the default with PGCD and I also agree > that the TraceClassUnloading is annoying in a gc log. Cool, thanks. > I would propose that you look at HP's JVM GC log format as something > far easier to parse than the current format. So, the issue of what GC log format to actually use is kind of orthogonal to what we're proposing here. Basically, we're asking: do people want two formats (one: the concise, but very unhelpful in most cases, -verbosegc format and second: the more useful and elaborate +PGCD format) or would most people be happy if we merge those two? FWIW, yes, we are considering changing the +PGCD format so that it is more easily parseable. Tony > http://docs.hp.com/en/5992-1918/ch01s25.html > > or perhaps the jstatd format. > > After about 5 or 6 years I can almost eyeball the current file format > but a numeric based, space or comma-separated list of the GC cause and > the memory space allocations would be a lot easier for post-mortem > diagnosis and tuning. 
And, at least in the HP case, the overhead is > very minimal (you get almost the same amount of detail as from > -XX:+PrintTenuringDistribution in a single line)...and since there > still is not a robust and easy-to-use equivalent of HPjmeter > (disclosure - I worked for 10 years for HP) for analyzing the JavaSoft > HotSpot JVMs, this would be an improvement. > > My 2 cents, > Fino > > On Thu, Apr 3, 2008 at 11:22 PM, Tony Printezis > > wrote: > > We're proposing a few small changes (and on significant one) to > the way > the GC logging flags work: > > a) When +PrintGCDetails is set, +PrintGCTimeStamps should be set by > default (currently, it has to be set explicitly; what we've found is > that it's always more useful to have the time stamps in a GC log when > analyzing it) > b) When -XXloggc: is set, +PrintGCDetails and +PrintGCTimeStamps > are set > automatically (instead of -verbosegc and +PrintGCTimeStamps) > (again, for > us, it's always more useful to have the more detailed +PrintGCDetails > output in the log when analyzing it) > c) When -verbosegc, +PrintGCDetails, or -Xloggc: are set, > +TraceClassUnloading is _not_ automatically set, as it is now (it > polutes the output a bit during Full GCs) > > Notice that, for the above, you'll still be able to get the old > behavior > by setting / unsetting some flags; we're just trying to set more > sensible defaults. > > The more significant proposal: > > d) Eliminate the -verbosegc format in favor of the (more detailed and > more useful, though less concise) +PrintGCDetails format. Do people > still heavily use and rely on the -verbosegc format in a way that > migration to the +PrintGCDetails format will be difficult? > > Your thoughts and feedback on the above will be appreciated. Thank > you, > > Tony, HS GC Group > > -- > ---------------------------------------------------------------------- > | Tony Printezis, Staff Engineer | Sun Microsystems Inc. 
| > | | MS BUR02-311 | > | e-mail: tony.printezis at sun.com > | 35 Network Drive | > | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | > ---------------------------------------------------------------------- > e-mail client: Thunderbird (Solaris) > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > > -- > Michael Finocchiaro > michael.finocchiaro at gmail.com > Mobile Telephone: +33 6 67 90 64 39 > MSN: le_fino at hotmail.com -- ---------------------------------------------------------------------- | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | | | MS BUR02-311 | | e-mail: tony.printezis at sun.com | 35 Network Drive | | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | ---------------------------------------------------------------------- e-mail client: Thunderbird (Solaris) From tony.printezis at sun.com Fri Apr 4 09:43:01 2008 From: tony.printezis at sun.com (Tony Printezis) Date: Fri, 04 Apr 2008 12:43:01 -0400 Subject: Proposed changes to the GC logging flags In-Reply-To: <83a51e120804040527g1eb5aa94t49997fab39878701@mail.gmail.com> References: <47F54A78.1020303@sun.com> <8b61e5430804040046h7cbc85a1o738aafbff08696e3@mail.gmail.com> <83a51e120804040527g1eb5aa94t49997fab39878701@mail.gmail.com> Message-ID: <47F65A95.5030505@sun.com> James Nichols wrote: > I'd rather not have the format of the log changed, but simplifying the > arguments would be great. Isn't that the only change being proposed? We're asking feedback on both. > Doesn't everyone have a Perl script that they use to pull out the > stuff from the log that's relevant and put it in gnuplot or Excel or > something? 
If we can provide you with a Java library that parses whatever format we come up with and allows you to access the data from Java, in a sense shielding you from GC log format changes, would that be a reasonable compromise? Tony > On Fri, Apr 4, 2008 at 3:46 AM, Michael Finocchiaro > > > wrote: > > That is good idea to make PGCTS the default with PGCD and I also > agree that the TraceClassUnloading is annoying in a gc log. > > I would propose that you look at HP's JVM GC log format as > something far easier to parse than the current format. > > http://docs.hp.com/en/5992-1918/ch01s25.html > > or perhaps the jstatd format. > > After about 5 or 6 years I can almost eyeball the current file > format but a numeric based, space or comma-separated list of the > GC cause and the memory space allocations would be a lot easier > for post-mortem diagnosis and tuning. And, at least in the HP > case, the overhead is very minimal (you get almost the same amount > of detail as from -XX:+PrintTenuringDistribution in a single > line)...and since there still is not a robust and easy-to-use > equivalent of HPjmeter (disclosure - I worked for 10 years for HP) > for analyzing the JavaSoft HotSpot JVMs, this would be an improvement. 
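For comparison, the "numeric, comma-separated list" idea is only a few lines in any scripting language once the line shape is regular. A hypothetical Python sketch over the current simple log format follows (the column names are illustrative, not any proposed format):

```python
import csv
import io
import re

# Matches entries like "3606.511: [Full GC 49730K->42771K(1883840K), 2.5417084 secs]"
RECORD = re.compile(
    r"(?P<ts>[\d.]+): \[(?P<kind>ParNew|Full GC) "
    r"(?P<before>\d+)K->(?P<after>\d+)K\((?P<heap>\d+)K\), (?P<pause>[\d.]+) secs\]"
)

def gc_log_to_csv(lines, out):
    """Extract timestamp, collection kind, occupancies, and pause into CSV."""
    writer = csv.writer(out)
    writer.writerow(["timestamp_s", "kind", "before_kb", "after_kb", "heap_kb", "pause_s"])
    for line in lines:
        m = RECORD.search(line)
        if m:
            writer.writerow([m["ts"], m["kind"], m["before"],
                             m["after"], m["heap"], m["pause"]])

buf = io.StringIO()
gc_log_to_csv(["3606.511: [Full GC 49730K->42771K(1883840K), 2.5417084 secs]"], buf)
print(buf.getvalue())
```

The resulting CSV drops straight into gnuplot or a spreadsheet, which is exactly the post-processing step the Perl scripts in question perform today.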
> > My 2 cents, > Fino > > > On Thu, Apr 3, 2008 at 11:22 PM, Tony Printezis > > wrote: > > We're proposing a few small changes (and on significant one) > to the way > the GC logging flags work: > > a) When +PrintGCDetails is set, +PrintGCTimeStamps should be > set by > default (currently, it has to be set explicitly; what we've > found is > that it's always more useful to have the time stamps in a GC > log when > analyzing it) > b) When -XXloggc: is set, +PrintGCDetails and > +PrintGCTimeStamps are set > automatically (instead of -verbosegc and +PrintGCTimeStamps) > (again, for > us, it's always more useful to have the more detailed > +PrintGCDetails > output in the log when analyzing it) > c) When -verbosegc, +PrintGCDetails, or -Xloggc: are set, > +TraceClassUnloading is _not_ automatically set, as it is now (it > polutes the output a bit during Full GCs) > > Notice that, for the above, you'll still be able to get the > old behavior > by setting / unsetting some flags; we're just trying to set more > sensible defaults. > > The more significant proposal: > > d) Eliminate the -verbosegc format in favor of the (more > detailed and > more useful, though less concise) +PrintGCDetails format. Do > people > still heavily use and rely on the -verbosegc format in a way that > migration to the +PrintGCDetails format will be difficult? > > Your thoughts and feedback on the above will be appreciated. > Thank you, > > Tony, HS GC Group > > -- > ---------------------------------------------------------------------- > | Tony Printezis, Staff Engineer | Sun Microsystems Inc. 
> | > | | MS BUR02-311 > | > | e-mail: tony.printezis at sun.com > | 35 Network Drive > | > | office: +1 781 442 0998 (x20998) | Burlington, > MA01803-0902, USA | > ---------------------------------------------------------------------- > e-mail client: Thunderbird (Solaris) > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > > -- > Michael Finocchiaro > michael.finocchiaro at gmail.com > Mobile Telephone: +33 6 67 90 64 39 > MSN: le_fino at hotmail.com > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > -- ---------------------------------------------------------------------- | Tony Printezis, Staff Engineer | Sun Microsystems Inc. | | | MS BUR02-311 | | e-mail: tony.printezis at sun.com | 35 Network Drive | | office: +1 781 442 0998 (x20998) | Burlington, MA01803-0902, USA | ---------------------------------------------------------------------- e-mail client: Thunderbird (Solaris) From adamh at basis.com Fri Apr 4 10:40:18 2008 From: adamh at basis.com (Adam Hawthorne) Date: Fri, 4 Apr 2008 13:40:18 -0400 Subject: Proposed changes to the GC logging flags In-Reply-To: <47F54A78.1020303@sun.com> References: <47F54A78.1020303@sun.com> Message-ID: <200804041340.18288.adamh@basis.com> We always require customers to enable the -XX:+PrintGCDetails and PrintGCTimeStamps... making -verbose:gc do this by default would be a welcome change for us. The -verbose:gc output is not helpful for us. 
Thanks, Adam On Thu April 3 2008, Tony Printezis wrote: > We're proposing a few small changes (and on significant one) to the way > the GC logging flags work: > > a) When +PrintGCDetails is set, +PrintGCTimeStamps should be set by > default (currently, it has to be set explicitly; what we've found is > that it's always more useful to have the time stamps in a GC log when > analyzing it) > b) When -XXloggc: is set, +PrintGCDetails and +PrintGCTimeStamps are set > automatically (instead of -verbosegc and +PrintGCTimeStamps) (again, for > us, it's always more useful to have the more detailed +PrintGCDetails > output in the log when analyzing it) > c) When -verbosegc, +PrintGCDetails, or -Xloggc: are set, > +TraceClassUnloading is _not_ automatically set, as it is now (it > polutes the output a bit during Full GCs) > > Notice that, for the above, you'll still be able to get the old behavior > by setting / unsetting some flags; we're just trying to set more > sensible defaults. > > The more significant proposal: > > d) Eliminate the -verbosegc format in favor of the (more detailed and > more useful, though less concise) +PrintGCDetails format. Do people > still heavily use and rely on the -verbosegc format in a way that > migration to the +PrintGCDetails format will be difficult? > > Your thoughts and feedback on the above will be appreciated. Thank you, > > Tony, HS GC Group > -- Adam Hawthorne Software Engineer BASIS International Ltd. www.basis.com +1.505.345.5232 Phone -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part. 
Url : http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080404/cdcc58fe/attachment.bin From jamesnichols3 at gmail.com Fri Apr 4 10:59:18 2008 From: jamesnichols3 at gmail.com (jamesnichols3 at gmail.com) Date: Fri, 4 Apr 2008 17:59:18 +0000 Subject: Proposed changes to the GC logging flags In-Reply-To: <200804041340.18288.adamh@basis.com> References: <47F54A78.1020303@sun.com><200804041340.18288.adamh@basis.com> Message-ID: <856414486-1207331960-cardhu_decombobulator_blackberry.rim.net-600871317-@bxe004.bisx.prod.on.blackberry> I echo these sentiments. It would make it a lot easier to get operations/IT types to enable this. I don't know what it is, but they get nervous when I ask to enable a bunch of arguments. Jim Sent from my Verizon Wireless BlackBerry -----Original Message----- From: Adam Hawthorne Date: Fri, 4 Apr 2008 13:40:18 To:hotspot-gc-use at openjdk.java.net Subject: Re: Proposed changes to the GC logging flags We always require customers to enable the -XX:+PrintGCDetails and PrintGCTimeStamps... making -verbose:gc do this by default would be a welcome change for us. The -verbose:gc output is not helpful for us. 
Thanks, Adam On Thu April 3 2008, Tony Printezis wrote: > We're proposing a few small changes (and on significant one) to the way > the GC logging flags work: > > a) When +PrintGCDetails is set, +PrintGCTimeStamps should be set by > default (currently, it has to be set explicitly; what we've found is > that it's always more useful to have the time stamps in a GC log when > analyzing it) > b) When -XXloggc: is set, +PrintGCDetails and +PrintGCTimeStamps are set > automatically (instead of -verbosegc and +PrintGCTimeStamps) (again, for > us, it's always more useful to have the more detailed +PrintGCDetails > output in the log when analyzing it) > c) When -verbosegc, +PrintGCDetails, or -Xloggc: are set, > +TraceClassUnloading is_not_ automatically set, as it is now (it > polutes the output a bit during Full GCs) > > Notice that, for the above, you'll still be able to get the old behavior > by setting / unsetting some flags; we're just trying to set more > sensible defaults. > > The more significant proposal: > > d) Eliminate the -verbosegc format in favor of the (more detailed and > more useful, though less concise) +PrintGCDetails format. Do people > still heavily use and rely on the -verbosegc format in a way that > migration to the +PrintGCDetails format will be difficult? > > Your thoughts and feedback on the above will be appreciated. Thank you, > > Tony, HS GC Group > -- Adam Hawthorne Software Engineer BASIS International Ltd. 
www.basis.com +1.505.345.5232 Phone _______________________________________________ hotspot-gc-use mailing list hotspot-gc-use at openjdk.java.net http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From michael.finocchiaro at gmail.com Fri Apr 4 11:02:39 2008 From: michael.finocchiaro at gmail.com (Michael Finocchiaro) Date: Fri, 4 Apr 2008 20:02:39 +0200 Subject: RMI Activity Threads Lock GC o/p In-Reply-To: References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> Message-ID: <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> Ramki, Can you explain how -XX:+CMSClassUnloadingEnabled is going to help? I haven't used that parameter before. Thanks, Fino On Fri, Apr 4, 2008 at 6:42 PM, Y Srinivas Ramakrishna < Y.S.Ramakrishna at sun.com> wrote: > > Hi Keith -- > > See inline below:- > > > We are running into issues where ostensibly the memory management > > appears OK; less than 1% of the tome is in GC - when I put this file > > into HPJmeter 3.1; > > > > 0.000: [ParNew 47626K->6985K(1883840K), 0.5939434 secs] > > 0.613: [Full GC 6985K->6940K(1883840K), 0.7510576 secs] > > 288.169: [ParNew 399516K->20399K(1883840K), 3.0827681 secs] > > 844.451: [ParNew 412975K->26162K(1883840K), 0.3202843 secs] > > 1491.991: [ParNew 418738K->31914K(1883840K), 0.2347086 secs] > > 2177.292: [ParNew 424490K->37760K(1883840K), 0.3079626 secs] > > 2855.229: [ParNew 430336K->43595K(1883840K), 2.0764301 secs] > > 3575.979: [ParNew 436171K->49438K(1883840K), 0.2961466 secs] > > 3606.470: [ParNew 69808K->49730K(1883840K), 0.0388510 secs] > > 3606.511: [Full GC 49730K->42771K(1883840K), 2.5417084 secs] > > 4292.023: [ParNew 435347K->48662K(1883840K), 0.2445446 secs] > > 4970.650: [ParNew 441238K->54506K(1883840K), 0.2373110 secs] > > 5677.603: [ParNew 447082K->60349K(1883840K), 0.3322904 secs] > > 6367.994: [ParNew 452925K->66188K(1883840K), 0.2645763 secs] > > 7055.852: [ParNew 458764K->72033K(1883840K), 0.8281927 secs] > > 7210.009: [ParNew 
167469K->73442K(1883840K), 0.0969525 secs] > > 7210.109: [Full GC 73442K->41123K(1883840K), 2.1642088 secs] > > 7909.604: [ParNew 433699K->47011K(1883840K), 0.2533163 secs] > > 8603.519: [ParNew 439587K->52863K(1883840K), 0.2230794 secs] > > 9289.216: [ParNew 445439K->58709K(1883840K), 0.2359698 secs] > > 9968.793: [ParNew 451285K->64554K(1883840K), 0.2656911 secs] > > 10649.694: [ParNew 457130K->70393K(1883840K), 0.2243246 secs] > > 10813.028: [ParNew 158599K->71696K(1883840K), 0.0770400 secs] > > 10813.107: [Full GC 71696K->41024K(1883840K), 1.7410828 secs] > > 11503.339: [ParNew 433600K->46907K(1883840K), 0.2542805 secs] > > 12191.022: [ParNew 439483K->52751K(1883840K), 0.2257059 secs] > > 12864.793: [ParNew 445327K->58591K(1883840K), 0.2231573 secs] > > 13546.217: [ParNew 451167K->64433K(1883840K), 0.2532376 secs] > > 14247.570: [ParNew 457009K->70278K(1883840K), 0.2111731 secs] > > 14415.581: [ParNew 168788K->71740K(1883840K), 0.0916532 secs] > > 14415.675: [Full GC 71740K->41182K(1883840K), 1.7439608 secs] > > 15096.989: [ParNew 433758K->47062K(1883840K), 0.2752132 secs] > > 15777.472: [ParNew 439638K->52905K(1883840K), 0.2132059 secs] > > 16475.184: [ParNew 445481K->58750K(1883840K), 0.2249407 secs] > > 16956.572: [ParNew 451326K->66543K(1883840K), 0.2237252 secs] > > 17593.401: [ParNew 459119K->72857K(1883840K), 0.2493865 secs] > > 18018.152: [ParNew 313587K->76412K(1883840K), 0.1719212 secs] > > 18018.326: [Full GC 76412K->44673K(1883840K), 1.9000112 secs] > > 18734.462: [ParNew 437249K->50542K(1883840K), 0.2459797 secs] > > 19434.180: [ParNew 443118K->56364K(1883840K), 0.2399764 secs] > > 20026.580: [ParNew 448940K->63103K(1883840K), 0.2327731 secs] > > 20723.692: [ParNew 455679K->68869K(1883840K), 0.2299928 secs] > > 21338.875: [ParNew 461445K->74742K(1883840K), 0.2005874 secs] > > 21620.952: [ParNew 269312K->78103K(1883840K), 0.1174351 secs] > > 21621.072: [Full GC 78103K->45998K(1883840K), 1.8386129 secs] > > 22227.195: [ParNew 
438574K->51330K(1883840K), 0.2042002 secs] > > 22696.526: [ParNew 443906K->58015K(1883840K), 0.2154086 secs] > > 23246.252: [ParNew 450591K->63639K(1883840K), 0.2171688 secs] > > 23936.816: [ParNew 456215K->69353K(1883840K), 0.2421265 secs] > > 24529.163: [ParNew 461929K->75718K(1883840K), 0.1985638 secs] > > 25062.082: [ParNew 468294K->82472K(1883840K), 0.2119384 secs] > > 25223.640: [ParNew 205230K->84729K(1883840K), 0.0745738 secs] > > 25223.717: [Full GC 84729K->52981K(1883840K), 1.9445841 secs] > > 25808.453: [ParNew 445557K->58730K(1883840K), 0.2220857 secs] > > 27012.025: [ParNew 450888K->65873K(1883840K), 0.1835305 secs] > > 28826.400: [ParNew 194359K->68617K(1883840K), 0.0476450 secs] > > 28826.450: [Full GC 68617K->33933K(1883840K), 1.3288466 secs] > > 31626.367: [ParNew 426509K->39131K(1883840K), 0.1329507 secs] > > 32428.552: [ParNew 79650K->40294K(1883840K), 0.0451805 secs] > > 32428.600: [Full GC 40294K->29329K(1883840K), 1.0458070 secs] > > 36030.356: [ParNew 157110K->31764K(1883840K), 0.1066607 secs] > > 36030.465: [Full GC 31764K->28476K(1883840K), 0.9791810 secs] > > 39632.163: [ParNew 96572K->30448K(1883840K), 0.0852053 secs] > > 39632.251: [Full GC 30448K->27232K(1883840K), 0.9056725 secs] > > 43233.856: [ParNew 215673K->31439K(1883840K), 0.2064516 secs] > > 43234.074: [Full GC 31439K->28437K(1883840K), 1.1075595 secs] > > 46835.908: [ParNew 302993K->39167K(1883840K), 0.1579830 secs] > > 46836.074: [Full GC 39167K->35187K(1883840K), 1.1977157 secs] > > 50437.975: [ParNew 233401K->40095K(1883840K), 0.1419100 secs] > > 50438.130: [Full GC 40095K->36165K(1883840K), 1.3757682 secs] > > 54040.209: [ParNew 47288K->36927K(1883840K), 2.4154908 secs] > > 54042.656: [Full GC 36927K->35142K(1883840K), 1.7857094 secs] > > 57645.546: [ParNew 48404K->36028K(1883840K), 0.9233543 secs] > > 57646.503: [Full GC 36028K->33941K(1883840K), 1.2575880 secs] > > 61248.475: [ParNew 62613K->36158K(1883840K), 1.5358356 secs] > > 61250.042: [Full GC 
36158K->34806K(1883840K), 1.1270633 secs] > > 64852.138: [ParNew 89705K->37904K(1883840K), 2.8467706 secs] > > 64855.019: [Full GC 37904K->36625K(1883840K), 1.2928314 secs] > > > > Did you notice that towards the end of the log above, your allocation > rates > have plummetted and the scavenges themselves are taking pretty long? > Perhaps that gives you some ideas as to what could be happening? > > > Here are our VM args: > > > > -server -Xms1840m -Xmx1840m -Xss256k -XX:+UseConcMarkSweepGC > > -XX:NewSize=384m -XX:MaxNewSize=384m -XX:PermSize=256m > > -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000 > -Dsun.rmi.dgc.server.gcInterval=3600000 > > -Djava.headless.awt=true -Xloggc:gc.log > > I'd suggest experimenting with either:- > > -XX:+ExplicitGCInvokesConcurrent[AndUnloadsClasses] > -XX:+CMSClassUnloadingEnabled > > or, perhaps less desirable, but certainly useful from the prespective of > your > debugging objectives here:- > > -XX:+DisableExplicitGC -XX:+CMSClassUnloadingEnabled > > > > > We see the DGC working every hour - 3600 seconds apart a ParNew > > followed by a Full GC - and there is a plethora of class unloading of > > the Sun reflection classes since we do a lot of RMI - > serialisation/deserialisation. > > > > Should we increase the frequency of DGC? Not sure why the VM hangs - > > possibly our client code - but we wanted to exclude completely the > > idea that GC is culpable of creating this or contributing to this > failure. > > Check that you are not paging and running slow rather than hanging? > > When you get the "hung jvm", if on Solaris, try prstat -L -p to see > if any threads are active, and also try pstack (perhaps several > seconds apart, to observe any active threads). If the application shows > no activity (from above), try jstack (or kill -QUIT ) to > see if you can elicit a java thread stack dump. 
> > (I was not sure from yr description whether you believed the JVM was > hung or that the jvm was responding -- for example doing the occasional > gc etc -- but the application response had plummeted.) > > -- ramki > > > > > thanks > > > > keith > > > > > > _______________________________________________ > > hotspot-gc-use mailing list > > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > -- Michael Finocchiaro michael.finocchiaro at gmail.com Mobile Telephone: +33 6 67 90 64 39 MSN: le_fino at hotmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080404/9f3f24e9/attachment.html From Y.S.Ramakrishna at Sun.COM Fri Apr 4 11:10:21 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Fri, 04 Apr 2008 11:10:21 -0700 Subject: RMI Activity Threads Lock GC o/p In-Reply-To: <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> Message-ID: > Can you explain how -XX:+CMSClassUnloadingEnabled is going to help? I > haven't used that parameter before. The idea is that, assuming concurrent collections happen, classes will be unloaded (and perm gen cleaned) as a result of this flag, and will thus make it unnecessary for a full gc to reclaim that storage. Sometimes, this can have the beneficial effect of also cleaning up a bunch of storage in non-perm heap which had been referenced from objects in the perm gen which were no longer reachable, but which tended to act as "roots" keeping them alive. 
It's a general prophylactic in this case, rather than specifically targeted at an issue that Keith is seeing (which specific problem, as I indicated, I do not quite fully understand yet from his original email). -- ramki > Thanks, > Fino > > On Fri, Apr 4, 2008 at 6:42 PM, Y Srinivas Ramakrishna < > Y.S.Ramakrishna at sun.com> wrote: > > > > > Hi Keith -- > > > > See inline below:- > > > > > We are running into issues where ostensibly the memory management > > > appears OK; less than 1% of the tome is in GC - when I put this file > > > into HPJmeter 3.1; > > > > > > 0.000: [ParNew 47626K->6985K(1883840K), 0.5939434 secs] > > > 0.613: [Full GC 6985K->6940K(1883840K), 0.7510576 secs] > > > 288.169: [ParNew 399516K->20399K(1883840K), 3.0827681 secs] > > > 844.451: [ParNew 412975K->26162K(1883840K), 0.3202843 secs] > > > 1491.991: [ParNew 418738K->31914K(1883840K), 0.2347086 secs] > > > 2177.292: [ParNew 424490K->37760K(1883840K), 0.3079626 secs] > > > 2855.229: [ParNew 430336K->43595K(1883840K), 2.0764301 secs] > > > 3575.979: [ParNew 436171K->49438K(1883840K), 0.2961466 secs] > > > 3606.470: [ParNew 69808K->49730K(1883840K), 0.0388510 secs] > > > 3606.511: [Full GC 49730K->42771K(1883840K), 2.5417084 secs] > > > 4292.023: [ParNew 435347K->48662K(1883840K), 0.2445446 secs] > > > 4970.650: [ParNew 441238K->54506K(1883840K), 0.2373110 secs] > > > 5677.603: [ParNew 447082K->60349K(1883840K), 0.3322904 secs] > > > 6367.994: [ParNew 452925K->66188K(1883840K), 0.2645763 secs] > > > 7055.852: [ParNew 458764K->72033K(1883840K), 0.8281927 secs] > > > 7210.009: [ParNew 167469K->73442K(1883840K), 0.0969525 secs] > > > 7210.109: [Full GC 73442K->41123K(1883840K), 2.1642088 secs] > > > 7909.604: [ParNew 433699K->47011K(1883840K), 0.2533163 secs] > > > 8603.519: [ParNew 439587K->52863K(1883840K), 0.2230794 secs] > > > 9289.216: [ParNew 445439K->58709K(1883840K), 0.2359698 secs] > > > 9968.793: [ParNew 451285K->64554K(1883840K), 0.2656911 secs] > > > 10649.694: [ParNew 
457130K->70393K(1883840K), 0.2243246 secs] > > > 10813.028: [ParNew 158599K->71696K(1883840K), 0.0770400 secs] > > > 10813.107: [Full GC 71696K->41024K(1883840K), 1.7410828 secs] > > > 11503.339: [ParNew 433600K->46907K(1883840K), 0.2542805 secs] > > > 12191.022: [ParNew 439483K->52751K(1883840K), 0.2257059 secs] > > > 12864.793: [ParNew 445327K->58591K(1883840K), 0.2231573 secs] > > > 13546.217: [ParNew 451167K->64433K(1883840K), 0.2532376 secs] > > > 14247.570: [ParNew 457009K->70278K(1883840K), 0.2111731 secs] > > > 14415.581: [ParNew 168788K->71740K(1883840K), 0.0916532 secs] > > > 14415.675: [Full GC 71740K->41182K(1883840K), 1.7439608 secs] > > > 15096.989: [ParNew 433758K->47062K(1883840K), 0.2752132 secs] > > > 15777.472: [ParNew 439638K->52905K(1883840K), 0.2132059 secs] > > > 16475.184: [ParNew 445481K->58750K(1883840K), 0.2249407 secs] > > > 16956.572: [ParNew 451326K->66543K(1883840K), 0.2237252 secs] > > > 17593.401: [ParNew 459119K->72857K(1883840K), 0.2493865 secs] > > > 18018.152: [ParNew 313587K->76412K(1883840K), 0.1719212 secs] > > > 18018.326: [Full GC 76412K->44673K(1883840K), 1.9000112 secs] > > > 18734.462: [ParNew 437249K->50542K(1883840K), 0.2459797 secs] > > > 19434.180: [ParNew 443118K->56364K(1883840K), 0.2399764 secs] > > > 20026.580: [ParNew 448940K->63103K(1883840K), 0.2327731 secs] > > > 20723.692: [ParNew 455679K->68869K(1883840K), 0.2299928 secs] > > > 21338.875: [ParNew 461445K->74742K(1883840K), 0.2005874 secs] > > > 21620.952: [ParNew 269312K->78103K(1883840K), 0.1174351 secs] > > > 21621.072: [Full GC 78103K->45998K(1883840K), 1.8386129 secs] > > > 22227.195: [ParNew 438574K->51330K(1883840K), 0.2042002 secs] > > > 22696.526: [ParNew 443906K->58015K(1883840K), 0.2154086 secs] > > > 23246.252: [ParNew 450591K->63639K(1883840K), 0.2171688 secs] > > > 23936.816: [ParNew 456215K->69353K(1883840K), 0.2421265 secs] > > > 24529.163: [ParNew 461929K->75718K(1883840K), 0.1985638 secs] > > > 25062.082: [ParNew 468294K->82472K(1883840K), 
0.2119384 secs] > > > 25223.640: [ParNew 205230K->84729K(1883840K), 0.0745738 secs] > > > 25223.717: [Full GC 84729K->52981K(1883840K), 1.9445841 secs] > > > 25808.453: [ParNew 445557K->58730K(1883840K), 0.2220857 secs] > > > 27012.025: [ParNew 450888K->65873K(1883840K), 0.1835305 secs] > > > 28826.400: [ParNew 194359K->68617K(1883840K), 0.0476450 secs] > > > 28826.450: [Full GC 68617K->33933K(1883840K), 1.3288466 secs] > > > 31626.367: [ParNew 426509K->39131K(1883840K), 0.1329507 secs] > > > 32428.552: [ParNew 79650K->40294K(1883840K), 0.0451805 secs] > > > 32428.600: [Full GC 40294K->29329K(1883840K), 1.0458070 secs] > > > 36030.356: [ParNew 157110K->31764K(1883840K), 0.1066607 secs] > > > 36030.465: [Full GC 31764K->28476K(1883840K), 0.9791810 secs] > > > 39632.163: [ParNew 96572K->30448K(1883840K), 0.0852053 secs] > > > 39632.251: [Full GC 30448K->27232K(1883840K), 0.9056725 secs] > > > 43233.856: [ParNew 215673K->31439K(1883840K), 0.2064516 secs] > > > 43234.074: [Full GC 31439K->28437K(1883840K), 1.1075595 secs] > > > 46835.908: [ParNew 302993K->39167K(1883840K), 0.1579830 secs] > > > 46836.074: [Full GC 39167K->35187K(1883840K), 1.1977157 secs] > > > 50437.975: [ParNew 233401K->40095K(1883840K), 0.1419100 secs] > > > 50438.130: [Full GC 40095K->36165K(1883840K), 1.3757682 secs] > > > 54040.209: [ParNew 47288K->36927K(1883840K), 2.4154908 secs] > > > 54042.656: [Full GC 36927K->35142K(1883840K), 1.7857094 secs] > > > 57645.546: [ParNew 48404K->36028K(1883840K), 0.9233543 secs] > > > 57646.503: [Full GC 36028K->33941K(1883840K), 1.2575880 secs] > > > 61248.475: [ParNew 62613K->36158K(1883840K), 1.5358356 secs] > > > 61250.042: [Full GC 36158K->34806K(1883840K), 1.1270633 secs] > > > 64852.138: [ParNew 89705K->37904K(1883840K), 2.8467706 secs] > > > 64855.019: [Full GC 37904K->36625K(1883840K), 1.2928314 secs] > > > > > > > Did you notice that towards the end of the log above, your allocation > > rates > > have plummetted and the scavenges themselves are taking 
pretty long? > > Perhaps that gives you some ideas as to what could be happening? > > > > > Here are our VM args: > > > > > > -server -Xms1840m -Xmx1840m -Xss256k -XX:+UseConcMarkSweepGC > > > -XX:NewSize=384m -XX:MaxNewSize=384m -XX:PermSize=256m > > > -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000 > > -Dsun.rmi.dgc.server.gcInterval=3600000 > > > -Djava.headless.awt=true -Xloggc:gc.log > > > > I'd suggest experimenting with either:- > > > > -XX:+ExplicitGCInvokesConcurrent[AndUnloadsClasses] > > -XX:+CMSClassUnloadingEnabled > > > > or, perhaps less desirable, but certainly useful from the > prespective of > > your > > debugging objectives here:- > > > > -XX:+DisableExplicitGC -XX:+CMSClassUnloadingEnabled > > > > > > > > We see the DGC working every hour - 3600 seconds apart a ParNew > > > followed by a Full GC - and there is a plethora of class unloading > of > > > the Sun reflection classes since we do a lot of RMI - > > serialisation/deserialisation. > > > > > > Should we increase the frequency of DGC? Not sure why the VM hangs > - > > > possibly our client code - but we wanted to exclude completely the > > > idea that GC is culpable of creating this or contributing to this > > failure. > > > > Check that you are not paging and running slow rather than hanging? > > > > When you get the "hung jvm", if on Solaris, try prstat -L -p > to see > > if any threads are active, and also try pstack (perhaps several > > seconds apart, to observe any active threads). If the application shows > > no activity (from above), try jstack (or kill -QUIT ) to > > see if you can elicit a java thread stack dump. > > > > (I was not sure from yr description whether you believed the JVM was > > hung or that the jvm was responding -- for example doing the occasional > > gc etc -- but the application response had plummeted.) 
> > > > -- ramki > > > > > > > > thanks > > > > > > keith > > > > > > > > > _______________________________________________ > > > hotspot-gc-use mailing list > > > hotspot-gc-use at openjdk.java.net > > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > _______________________________________________ > > hotspot-gc-use mailing list > > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > > > -- > Michael Finocchiaro > michael.finocchiaro at gmail.com > Mobile Telephone: +33 6 67 90 64 39 > MSN: le_fino at hotmail.com > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From michael.finocchiaro at gmail.com Fri Apr 4 11:42:56 2008 From: michael.finocchiaro at gmail.com (Michael Finocchiaro) Date: Fri, 4 Apr 2008 20:42:56 +0200 Subject: RMI Activity Threads Lock GC o/p In-Reply-To: References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> Message-ID: <8b61e5430804041142s4d15a387pe7b9c3d9270bb147@mail.gmail.com> OK got it. So with ParallelOldGC, do you get the Perm clean-up behavior and heap compaction by default? How can we get heap compaction with CMS? With -XX:+UseCMSCompactAtFullCollection? Would this clean up old RMI references? What does this one do: -XX:+CMSCompactWhenClearAllSoftRefs - would it be less intrusive? Does it play well with -XX:SoftRefLRUPolicyMSPerMB=1? I also asked elsewhere whether there was an equivalent to the AIX environment variable ALLOCATION_THRESHOLD to warn of large allocations coming in and threatening to blow the heap. Thanks, Fino On Fri, Apr 4, 2008 at 8:10 PM, Y Srinivas Ramakrishna < Y.S.Ramakrishna at sun.com> wrote: > > > Can you explain how -XX:+CMSClassUnloadingEnabled is going to help? I > > haven't used that parameter before. 
> > The idea is that, assuming concurrent collections happen, classes will be > unloaded (and perm gen cleaned) as a result of this flag, and will thus > make it unnecessary for a full gc to reclaim that storage. Sometimes, this > can have the beneficial effect of also cleaning up a bunch of storage in > non-perm heap which had been referenced from objects in the perm gen > which were no longer reachable, but which tended to act as "roots" keeping > them > alive. It's a general prophylactic in this case, rather than specifically > targeted at an issue > that Keith is seeing (which specific problem, as I indicated, I do not > quite fully understand yet from his original email). > > -- ramki > > > Thanks, > > Fino > > > > On Fri, Apr 4, 2008 at 6:42 PM, Y Srinivas Ramakrishna < > > Y.S.Ramakrishna at sun.com> wrote: > > > > > > > > Hi Keith -- > > > > > > See inline below:- > > > > > > > We are running into issues where ostensibly the memory management > > > > appears OK; less than 1% of the tome is in GC - when I put this file > > > > into HPJmeter 3.1; > > > > > > > > 0.000: [ParNew 47626K->6985K(1883840K), 0.5939434 secs] > > > > 0.613: [Full GC 6985K->6940K(1883840K), 0.7510576 secs] > > > > 288.169: [ParNew 399516K->20399K(1883840K), 3.0827681 secs] > > > > 844.451: [ParNew 412975K->26162K(1883840K), 0.3202843 secs] > > > > 1491.991: [ParNew 418738K->31914K(1883840K), 0.2347086 secs] > > > > 2177.292: [ParNew 424490K->37760K(1883840K), 0.3079626 secs] > > > > 2855.229: [ParNew 430336K->43595K(1883840K), 2.0764301 secs] > > > > 3575.979: [ParNew 436171K->49438K(1883840K), 0.2961466 secs] > > > > 3606.470: [ParNew 69808K->49730K(1883840K), 0.0388510 secs] > > > > 3606.511: [Full GC 49730K->42771K(1883840K), 2.5417084 secs] > > > > 4292.023: [ParNew 435347K->48662K(1883840K), 0.2445446 secs] > > > > 4970.650: [ParNew 441238K->54506K(1883840K), 0.2373110 secs] > > > > 5677.603: [ParNew 447082K->60349K(1883840K), 0.3322904 secs] > > > > 6367.994: [ParNew 
452925K->66188K(1883840K), 0.2645763 secs] > > > > [remainder of GC log snipped; identical to the log quoted earlier in this thread] > > > > 54040.209: [ParNew
47288K->36927K(1883840K), 2.4154908 secs] > > > > 54042.656: [Full GC 36927K->35142K(1883840K), 1.7857094 secs] > > > > 57645.546: [ParNew 48404K->36028K(1883840K), 0.9233543 secs] > > > > 57646.503: [Full GC 36028K->33941K(1883840K), 1.2575880 secs] > > > > 61248.475: [ParNew 62613K->36158K(1883840K), 1.5358356 secs] > > > > 61250.042: [Full GC 36158K->34806K(1883840K), 1.1270633 secs] > > > > 64852.138: [ParNew 89705K->37904K(1883840K), 2.8467706 secs] > > > > 64855.019: [Full GC 37904K->36625K(1883840K), 1.2928314 secs] > > > > > > > > > > Did you notice that towards the end of the log above, your allocation > > > rates > > > have plummetted and the scavenges themselves are taking pretty long? > > > Perhaps that gives you some ideas as to what could be happening? > > > > > > > Here are our VM args: > > > > > > > > -server -Xms1840m -Xmx1840m -Xss256k -XX:+UseConcMarkSweepGC > > > > -XX:NewSize=384m -XX:MaxNewSize=384m -XX:PermSize=256m > > > > -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000 > > > -Dsun.rmi.dgc.server.gcInterval=3600000 > > > > -Djava.headless.awt=true -Xloggc:gc.log > > > > > > I'd suggest experimenting with either:- > > > > > > -XX:+ExplicitGCInvokesConcurrent[AndUnloadsClasses] > > > -XX:+CMSClassUnloadingEnabled > > > > > > or, perhaps less desirable, but certainly useful from the > > prespective of > > > your > > > debugging objectives here:- > > > > > > -XX:+DisableExplicitGC -XX:+CMSClassUnloadingEnabled > > > > > > > > > > > We see the DGC working every hour - 3600 seconds apart a ParNew > > > > followed by a Full GC - and there is a plethora of class unloading > > of > > > > the Sun reflection classes since we do a lot of RMI - > > > serialisation/deserialisation. > > > > > > > > Should we increase the frequency of DGC? Not sure why the VM hangs > > - > > > > possibly our client code - but we wanted to exclude completely the > > > > idea that GC is culpable of creating this or contributing to this > > > failure. 
> > > > > > Check that you are not paging and running slow rather than hanging? > > > > > > When you get the "hung jvm", if on Solaris, try prstat -L -p > > to see > > > if any threads are active, and also try pstack (perhaps several > > > seconds apart, to observe any active threads). If the application > shows > > > no activity (from above), try jstack (or kill -QUIT ) to > > > see if you can elicit a java thread stack dump. > > > > > > (I was not sure from yr description whether you believed the JVM was > > > hung or that the jvm was responding -- for example doing the > occasional > > > gc etc -- but the application response had plummeted.) > > > > > > -- ramki > > > > > > > > > > > thanks > > > > > > > > keith > > > > > > > > > > > > _______________________________________________ > > > > hotspot-gc-use mailing list > > > > hotspot-gc-use at openjdk.java.net > > > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > _______________________________________________ > > > hotspot-gc-use mailing list > > > hotspot-gc-use at openjdk.java.net > > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > > > > > > > > -- > > Michael Finocchiaro > > michael.finocchiaro at gmail.com > > Mobile Telephone: +33 6 67 90 64 39 > > MSN: le_fino at hotmail.com > > _______________________________________________ > > hotspot-gc-use mailing list > > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > -- Michael Finocchiaro michael.finocchiaro at gmail.com Mobile Telephone: +33 6 67 90 64 39 MSN: le_fino at hotmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080404/4babb660/attachment.html From Y.S.Ramakrishna at Sun.COM Fri Apr 4 12:00:02 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Fri, 04 Apr 2008 12:00:02 -0700 Subject: RMI Activity Threads Lock GC o/p In-Reply-To: <8b61e5430804041142s4d15a387pe7b9c3d9270bb147@mail.gmail.com> References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> <8b61e5430804041142s4d15a387pe7b9c3d9270bb147@mail.gmail.com> Message-ID: > OK got it. So with ParallelOldGC, do you get the Perm clean-up > behavior and > heap compaction by default? Yes. [Editorial note for other readers not familiar with CMS: CMS does not, by default, unload classes during concurrent collection cycles: the choice of default is historical but we have stuck with it because of the negative impact we have seen on CMS remark pauses with certain kinds of pause-sensitive applications.] > How can we get heap compaction with CMS? With > -XX:+UseCMSCompactAtFullCollection? Would this clean up old RMI references? > What does this one do: -XX:+CMSCompactWhenClearAllSoftRefs - would it > be > less intrusive? Does it play well with -XX:SoftRefLRUPolicyMSPerMB=1? Unfortunately, with CMS, you do not get heap compaction during concurrent collection cycles. You get it only as a result of compacting full stop-the-world collections (such as you might get as a result of System.gc() or when there is a concurrent mode failure because of CMS' concurrent collections not keeping up, or because of excessive fragmentation). Note that +UseCMSCompactAtFullCollection is, in fact, the default. It determines whether a compacting collection (or a mark-sweep -- but do not compact -- collection) is done in response to System.gc() or upon concurrent mode failure. I can think of almost no situations when you would not go with the default (+) setting of this option.
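(To make the flag combinations discussed in this thread concrete, here is a hypothetical launch line -- the heap sizes echo Keith's settings and "MainClass" is a placeholder; this is a sketch, not a recommendation:)

```shell
# CMS with concurrent class unloading, and System.gc() -- e.g. the hourly
# RMI DGC call -- turned into a concurrent cycle instead of a
# stop-the-world full collection.
java -server -Xms1840m -Xmx1840m \
     -XX:+UseConcMarkSweepGC \
     -XX:+CMSClassUnloadingEnabled \
     -XX:+ExplicitGCInvokesConcurrent \
     -XX:+PrintGCDetails -Xloggc:gc.log \
     MainClass
```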
Similarly +CMSCompactWhenClearAllSoftRefs is true by default as well. Both are equally intrusive since they involve a stop world compacting collection (done alas single-threaded). This latter option is obscure enough that you should never need to use it. > > I also asked elsewhere whether there was an equivalent to the AIX > environment variable ALLOCATION_THRESHOLD to warn of large allocations > coming in and threatening to blow the heap. I am afraid I do not know what that variable does w./AIX etc. or what you mean here by "blow the heap". Did you want some way of telling (perhaps in the GC log or by other means) that the application was generating very large allocation requests and wanted to control the size threshold above which you would want such an event reported? -- ramki > > Thanks, > Fino > > On Fri, Apr 4, 2008 at 8:10 PM, Y Srinivas Ramakrishna < > Y.S.Ramakrishna at sun.com> wrote: > > > > > > Can you explain how -XX:+CMSClassUnloadingEnabled is going to > help? I > > > haven't used that parameter before. > > > > The idea is that, assuming concurrent collections happen, classes > will be > > unloaded (and perm gen cleaned) as a result of this flag, and will thus > > make it unnecessary for a full gc to reclaim that storage. > Sometimes, this > > can have the beneficial effect of also cleaning up a bunch of > storage in > > non-perm heap which had been referenced from objects in the perm gen > > which were no longer reachable, but which tended to act as "roots" keeping > > them > > alive. It's a general prophylactic in this case, rather than specifically > > targeted at an issue > > that Keith is seeing (which specific problem, as I indicated, I do not > > quite fully understand yet from his original email). 
> > > > -- ramki > > > > > Thanks, > > > Fino > > > > > > On Fri, Apr 4, 2008 at 6:42 PM, Y Srinivas Ramakrishna < > > > Y.S.Ramakrishna at sun.com> wrote: > > > > > > > > > > > Hi Keith -- > > > > > > > > See inline below:- > > > > > > > > > We are running into issues where ostensibly the memory management > > > > > appears OK; less than 1% of the tome is in GC - when I put > this file > > > > > into HPJmeter 3.1; > > > > > > > > > > 0.000: [ParNew 47626K->6985K(1883840K), 0.5939434 secs] > > > > > 0.613: [Full GC 6985K->6940K(1883840K), 0.7510576 secs] > > > > > 288.169: [ParNew 399516K->20399K(1883840K), 3.0827681 secs] > > > > > 844.451: [ParNew 412975K->26162K(1883840K), 0.3202843 secs] > > > > > 1491.991: [ParNew 418738K->31914K(1883840K), 0.2347086 secs] > > > > > 2177.292: [ParNew 424490K->37760K(1883840K), 0.3079626 secs] > > > > > 2855.229: [ParNew 430336K->43595K(1883840K), 2.0764301 secs] > > > > > 3575.979: [ParNew 436171K->49438K(1883840K), 0.2961466 secs] > > > > > 3606.470: [ParNew 69808K->49730K(1883840K), 0.0388510 secs] > > > > > 3606.511: [Full GC 49730K->42771K(1883840K), 2.5417084 secs] > > > > > 4292.023: [ParNew 435347K->48662K(1883840K), 0.2445446 secs] > > > > > 4970.650: [ParNew 441238K->54506K(1883840K), 0.2373110 secs] > > > > > 5677.603: [ParNew 447082K->60349K(1883840K), 0.3322904 secs] > > > > > 6367.994: [ParNew 452925K->66188K(1883840K), 0.2645763 secs] > > > > > 7055.852: [ParNew 458764K->72033K(1883840K), 0.8281927 secs] > > > > > 7210.009: [ParNew 167469K->73442K(1883840K), 0.0969525 secs] > > > > > 7210.109: [Full GC 73442K->41123K(1883840K), 2.1642088 secs] > > > > > 7909.604: [ParNew 433699K->47011K(1883840K), 0.2533163 secs] > > > > > 8603.519: [ParNew 439587K->52863K(1883840K), 0.2230794 secs] > > > > > 9289.216: [ParNew 445439K->58709K(1883840K), 0.2359698 secs] > > > > > 9968.793: [ParNew 451285K->64554K(1883840K), 0.2656911 secs] > > > > > 10649.694: [ParNew 457130K->70393K(1883840K), 0.2243246 secs] > > > > > 10813.028: 
[ParNew 158599K->71696K(1883840K), 0.0770400 secs] > > > > > [remainder of GC log snipped; identical to the log quoted earlier in this thread] > > > > > 64855.019: [Full GC 37904K->36625K(1883840K), 1.2928314
secs] > > > > > > > > > > > > > Did you notice that towards the end of the log above, your allocation > > > > rates > > > > have plummetted and the scavenges themselves are taking pretty long? > > > > Perhaps that gives you some ideas as to what could be happening? > > > > > > > > > Here are our VM args: > > > > > > > > > > -server -Xms1840m -Xmx1840m -Xss256k -XX:+UseConcMarkSweepGC > > > > > -XX:NewSize=384m -XX:MaxNewSize=384m -XX:PermSize=256m > > > > > -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000 > > > > -Dsun.rmi.dgc.server.gcInterval=3600000 > > > > > -Djava.headless.awt=true -Xloggc:gc.log > > > > > > > > I'd suggest experimenting with either:- > > > > > > > > -XX:+ExplicitGCInvokesConcurrent[AndUnloadsClasses] > > > > -XX:+CMSClassUnloadingEnabled > > > > > > > > or, perhaps less desirable, but certainly useful from the > > > prespective of > > > > your > > > > debugging objectives here:- > > > > > > > > -XX:+DisableExplicitGC -XX:+CMSClassUnloadingEnabled > > > > > > > > > > > > > > We see the DGC working every hour - 3600 seconds apart a ParNew > > > > > followed by a Full GC - and there is a plethora of class unloading > > > of > > > > > the Sun reflection classes since we do a lot of RMI - > > > > serialisation/deserialisation. > > > > > > > > > > Should we increase the frequency of DGC? Not sure why the VM hangs > > > - > > > > > possibly our client code - but we wanted to exclude completely > the > > > > > idea that GC is culpable of creating this or contributing to this > > > > failure. > > > > > > > > Check that you are not paging and running slow rather than hanging? > > > > > > > > When you get the "hung jvm", if on Solaris, try prstat -L -p > > > to see > > > > if any threads are active, and also try pstack (perhaps several > > > > seconds apart, to observe any active threads). If the application > > shows > > > > no activity (from above), try jstack (or kill -QUIT ) > to > > > > see if you can elicit a java thread stack dump. 
> > > > > > > > (I was not sure from yr description whether you believed the JVM > was > > > > hung or that the jvm was responding -- for example doing the > > occasional > > > > gc etc -- but the application response had plummeted.) > > > > > > > > -- ramki > > > > > > > > > > > > > > thanks > > > > > > > > > > keith > > > > > > > > > > > > > > > _______________________________________________ > > > > > hotspot-gc-use mailing list > > > > > hotspot-gc-use at openjdk.java.net > > > > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > _______________________________________________ > > > > hotspot-gc-use mailing list > > > > hotspot-gc-use at openjdk.java.net > > > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > > > > > > > > > > > > > -- > > > Michael Finocchiaro > > > michael.finocchiaro at gmail.com > > > Mobile Telephone: +33 6 67 90 64 39 > > > MSN: le_fino at hotmail.com > > > _______________________________________________ > > > hotspot-gc-use mailing list > > > hotspot-gc-use at openjdk.java.net > > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > > > -- > Michael Finocchiaro > michael.finocchiaro at gmail.com > Mobile Telephone: +33 6 67 90 64 39 > MSN: le_fino at hotmail.com > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From Y.S.Ramakrishna at Sun.COM Fri Apr 4 12:07:49 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Fri, 04 Apr 2008 12:07:49 -0700 Subject: RMI Activity Threads Lock GC o/p In-Reply-To: References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> <8b61e5430804041142s4d15a387pe7b9c3d9270bb147@mail.gmail.com> Message-ID: Hi Fino -- I forgot to answer the second of the two questions below:- > > What does this one do: 
-XX:+CMSCompactWhenClearAllSoftRefs - would > it > > be > > less intrusive? Does it play well with -XX:SoftRefLRUPolicyMSPerMB=1? The two are, in some sense, orthogonal; and, yes, that orthogonality implies that they "play well" together (although I might be able to give a more useful answer if you asked a more specific question regarding your concern about their interaction perhaps; as i said the first flag above is obscure enough that you should not really need to worry about it, IMO). -- ramki From michael.finocchiaro at gmail.com Fri Apr 4 12:17:08 2008 From: michael.finocchiaro at gmail.com (Michael Finocchiaro) Date: Fri, 4 Apr 2008 21:17:08 +0200 Subject: RMI Activity Threads Lock GC o/p In-Reply-To: References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> <8b61e5430804041142s4d15a387pe7b9c3d9270bb147@mail.gmail.com> Message-ID: <8b61e5430804041217q7fee7753pbcfdbf294f7471c0@mail.gmail.com> > I am afraid I do not know what that variable does w./AIX etc. or what you mean > here by "blow the heap". Did you want some way of telling (perhaps in the GC log > or by other means) that the application was generating very large allocation > requests and wanted to control the size threshold above which you would want > such an event reported? Yes, there is an environment variable on AIX called ALLOCATION_THRESHOLD=<size in bytes> that when set will have the JVM send on error to stdout (or stderr not sure) when an allocation greater than <size in bytes> occurs. This could be useful, well at least for us, when dealing with code that sends back way too much data. By the heap blowing up, I mean a sudden huge jump in memory consumption from which the heap may have a real hard time recovering if at all. Even better would be a flag that threw a stack trace when that allocation occurs. Back to the CMS vs.
ParallelOldGC, is there a technical description anywhere on the difference between these two Old Generation Parallel Collectors? I know that they are tightly associated with their respective Young Generation Collectors (UseParNewGC and ParallelGC respectively) but beyond that, I am not sure I understand what ParallelOldGC does. Thanks, Fino On Fri, Apr 4, 2008 at 9:00 PM, Y Srinivas Ramakrishna < Y.S.Ramakrishna at sun.com> wrote: > > > OK got it. So with ParallelOldGC, do you get the Perm clean-up > > behavior and > > heap compaction by default? > > Yes. [Editorial note for other readers not familiar with CMS: CMS does > not, > by default, unload classes during concurrent collection cycles: the choice > of > default is historical but we have stuck with it because of the negative > impact > we have seen on CMS remark pauses with certain kinds of pause-sensitive > applications.] > > > How can we get heap compaction with CMS? With > > -XX:+UseCMSCompactAtFullCollection? Would this clean up old RMI > references? > > What does this one do: -XX:+CMSCompactWhenClearAllSoftRefs - would it > > be > > less intrusive? Does it play well with -XX:SoftRefLRUPolicyMSPerMB=1? > > Unfortunately, with CMS, you do not get heap compaction during concurrent > collection cycles. You get it only as a result of compacting full > stop-world > collections (such as you might get as a result of System.gc() or when > there is > a concurrent mode failure because of CMS' concurrent collections not > keeping up, > or because of excessive fragmentation). > > Note that +UseCMSCompactAtFullCollection is, in fact, the default. It > determines > whether a compacting collection (or a mark-sweep -- but do not compact -- > collection) > is done in response to System.gc() or upon concurrent mode failure. I can > think > of almost no situations when you would not go with the default (+) setting > of this option. > > Similarly +CMSCompactWhenClearAllSoftRefs is true by default as well.
Both > are equally > intrusive since they involve a stop world compacting collection (done alas > single-threaded). > This latter option is obscure enough that you should never need to use it. > > > > > I also asked elsewhere whether there was an equivalent to the AIX > > environment variable ALLOCATION_THRESHOLD to warn of large allocations > > coming in and threatening to blow the heap. > > I am afraid I do not know what that variable does w./AIX etc. or what you > mean > here by "blow the heap". Did you want some way of telling (perhaps in the > GC log > or by other means) that the application was generating very large > allocation > requests and wanted to control the size threshold above which you would > want > such an event reported? > > -- ramki > > > > > Thanks, > > Fino > > > > On Fri, Apr 4, 2008 at 8:10 PM, Y Srinivas Ramakrishna < > > Y.S.Ramakrishna at sun.com> wrote: > > > > > > > > > Can you explain how -XX:+CMSClassUnloadingEnabled is going to > > help? I > > > > haven't used that parameter before. > > > > > > The idea is that, assuming concurrent collections happen, classes > > will be > > > unloaded (and perm gen cleaned) as a result of this flag, and will > thus > > > make it unnecessary for a full gc to reclaim that storage. > > Sometimes, this > > > can have the beneficial effect of also cleaning up a bunch of > > storage in > > > non-perm heap which had been referenced from objects in the perm gen > > > which were no longer reachable, but which tended to act as "roots" > keeping > > > them > > > alive. It's a general prophylactic in this case, rather than > specifically > > > targeted at an issue > > > that Keith is seeing (which specific problem, as I indicated, I do not > > > quite fully understand yet from his original email). 
> > > > > > -- ramki > > > > > > > Thanks, > > > > Fino > > > > > > > > On Fri, Apr 4, 2008 at 6:42 PM, Y Srinivas Ramakrishna < > > > > Y.S.Ramakrishna at sun.com> wrote: > > > > > > > > > > > > > > Hi Keith -- > > > > > > > > > > See inline below:- > > > > > > > > > > > We are running into issues where ostensibly the memory > management > > > > > > appears OK; less than 1% of the tome is in GC - when I put > > this file > > > > > > into HPJmeter 3.1; > > > > > > > > > > > > 0.000: [ParNew 47626K->6985K(1883840K), 0.5939434 secs] > > > > > > 0.613: [Full GC 6985K->6940K(1883840K), 0.7510576 secs] > > > > > > 288.169: [ParNew 399516K->20399K(1883840K), 3.0827681 secs] > > > > > > 844.451: [ParNew 412975K->26162K(1883840K), 0.3202843 secs] > > > > > > 1491.991: [ParNew 418738K->31914K(1883840K), 0.2347086 secs] > > > > > > 2177.292: [ParNew 424490K->37760K(1883840K), 0.3079626 secs] > > > > > > 2855.229: [ParNew 430336K->43595K(1883840K), 2.0764301 secs] > > > > > > 3575.979: [ParNew 436171K->49438K(1883840K), 0.2961466 secs] > > > > > > 3606.470: [ParNew 69808K->49730K(1883840K), 0.0388510 secs] > > > > > > 3606.511: [Full GC 49730K->42771K(1883840K), 2.5417084 secs] > > > > > > 4292.023: [ParNew 435347K->48662K(1883840K), 0.2445446 secs] > > > > > > 4970.650: [ParNew 441238K->54506K(1883840K), 0.2373110 secs] > > > > > > 5677.603: [ParNew 447082K->60349K(1883840K), 0.3322904 secs] > > > > > > 6367.994: [ParNew 452925K->66188K(1883840K), 0.2645763 secs] > > > > > > 7055.852: [ParNew 458764K->72033K(1883840K), 0.8281927 secs] > > > > > > 7210.009: [ParNew 167469K->73442K(1883840K), 0.0969525 secs] > > > > > > 7210.109: [Full GC 73442K->41123K(1883840K), 2.1642088 secs] > > > > > > 7909.604: [ParNew 433699K->47011K(1883840K), 0.2533163 secs] > > > > > > 8603.519: [ParNew 439587K->52863K(1883840K), 0.2230794 secs] > > > > > > 9289.216: [ParNew 445439K->58709K(1883840K), 0.2359698 secs] > > > > > > 9968.793: [ParNew 451285K->64554K(1883840K), 0.2656911 secs] > > > > > 
> 10649.694: [ParNew 457130K->70393K(1883840K), 0.2243246 secs] > > > > > > 10813.028: [ParNew 158599K->71696K(1883840K), 0.0770400 secs] > > > > > > 10813.107: [Full GC 71696K->41024K(1883840K), 1.7410828 secs] > > > > > > 11503.339: [ParNew 433600K->46907K(1883840K), 0.2542805 secs] > > > > > > 12191.022: [ParNew 439483K->52751K(1883840K), 0.2257059 secs] > > > > > > 12864.793: [ParNew 445327K->58591K(1883840K), 0.2231573 secs] > > > > > > 13546.217: [ParNew 451167K->64433K(1883840K), 0.2532376 secs] > > > > > > 14247.570: [ParNew 457009K->70278K(1883840K), 0.2111731 secs] > > > > > > 14415.581: [ParNew 168788K->71740K(1883840K), 0.0916532 secs] > > > > > > 14415.675: [Full GC 71740K->41182K(1883840K), 1.7439608 secs] > > > > > > 15096.989: [ParNew 433758K->47062K(1883840K), 0.2752132 secs] > > > > > > 15777.472: [ParNew 439638K->52905K(1883840K), 0.2132059 secs] > > > > > > 16475.184: [ParNew 445481K->58750K(1883840K), 0.2249407 secs] > > > > > > 16956.572: [ParNew 451326K->66543K(1883840K), 0.2237252 secs] > > > > > > 17593.401: [ParNew 459119K->72857K(1883840K), 0.2493865 secs] > > > > > > 18018.152: [ParNew 313587K->76412K(1883840K), 0.1719212 secs] > > > > > > 18018.326: [Full GC 76412K->44673K(1883840K), 1.9000112 secs] > > > > > > 18734.462: [ParNew 437249K->50542K(1883840K), 0.2459797 secs] > > > > > > 19434.180: [ParNew 443118K->56364K(1883840K), 0.2399764 secs] > > > > > > 20026.580: [ParNew 448940K->63103K(1883840K), 0.2327731 secs] > > > > > > 20723.692: [ParNew 455679K->68869K(1883840K), 0.2299928 secs] > > > > > > 21338.875: [ParNew 461445K->74742K(1883840K), 0.2005874 secs] > > > > > > 21620.952: [ParNew 269312K->78103K(1883840K), 0.1174351 secs] > > > > > > 21621.072: [Full GC 78103K->45998K(1883840K), 1.8386129 secs] > > > > > > 22227.195: [ParNew 438574K->51330K(1883840K), 0.2042002 secs] > > > > > > 22696.526: [ParNew 443906K->58015K(1883840K), 0.2154086 secs] > > > > > > 23246.252: [ParNew 450591K->63639K(1883840K), 0.2171688 secs] > > > > > > 
23936.816: [ParNew 456215K->69353K(1883840K), 0.2421265 secs] > > > > > > 24529.163: [ParNew 461929K->75718K(1883840K), 0.1985638 secs] > > > > > > 25062.082: [ParNew 468294K->82472K(1883840K), 0.2119384 secs] > > > > > > 25223.640: [ParNew 205230K->84729K(1883840K), 0.0745738 secs] > > > > > > 25223.717: [Full GC 84729K->52981K(1883840K), 1.9445841 secs] > > > > > > 25808.453: [ParNew 445557K->58730K(1883840K), 0.2220857 secs] > > > > > > 27012.025: [ParNew 450888K->65873K(1883840K), 0.1835305 secs] > > > > > > 28826.400: [ParNew 194359K->68617K(1883840K), 0.0476450 secs] > > > > > > 28826.450: [Full GC 68617K->33933K(1883840K), 1.3288466 secs] > > > > > > 31626.367: [ParNew 426509K->39131K(1883840K), 0.1329507 secs] > > > > > > 32428.552: [ParNew 79650K->40294K(1883840K), 0.0451805 secs] > > > > > > 32428.600: [Full GC 40294K->29329K(1883840K), 1.0458070 secs] > > > > > > 36030.356: [ParNew 157110K->31764K(1883840K), 0.1066607 secs] > > > > > > 36030.465: [Full GC 31764K->28476K(1883840K), 0.9791810 secs] > > > > > > 39632.163: [ParNew 96572K->30448K(1883840K), 0.0852053 secs] > > > > > > 39632.251: [Full GC 30448K->27232K(1883840K), 0.9056725 secs] > > > > > > 43233.856: [ParNew 215673K->31439K(1883840K), 0.2064516 secs] > > > > > > 43234.074: [Full GC 31439K->28437K(1883840K), 1.1075595 secs] > > > > > > 46835.908: [ParNew 302993K->39167K(1883840K), 0.1579830 secs] > > > > > > 46836.074: [Full GC 39167K->35187K(1883840K), 1.1977157 secs] > > > > > > 50437.975: [ParNew 233401K->40095K(1883840K), 0.1419100 secs] > > > > > > 50438.130: [Full GC 40095K->36165K(1883840K), 1.3757682 secs] > > > > > > 54040.209: [ParNew 47288K->36927K(1883840K), 2.4154908 secs] > > > > > > 54042.656: [Full GC 36927K->35142K(1883840K), 1.7857094 secs] > > > > > > 57645.546: [ParNew 48404K->36028K(1883840K), 0.9233543 secs] > > > > > > 57646.503: [Full GC 36028K->33941K(1883840K), 1.2575880 secs] > > > > > > 61248.475: [ParNew 62613K->36158K(1883840K), 1.5358356 secs] > > > > > > 
61250.042: [Full GC 36158K->34806K(1883840K), 1.1270633 secs] > > > > > > 64852.138: [ParNew 89705K->37904K(1883840K), 2.8467706 secs] > > > > > > 64855.019: [Full GC 37904K->36625K(1883840K), 1.2928314 secs] > > > > > > > > > > > > > > > > Did you notice that towards the end of the log above, your > allocation > > > > > rates > > > > > have plummetted and the scavenges themselves are taking pretty > long? > > > > > Perhaps that gives you some ideas as to what could be happening? > > > > > > > > > > > Here are our VM args: > > > > > > > > > > > > -server -Xms1840m -Xmx1840m -Xss256k -XX:+UseConcMarkSweepGC > > > > > > -XX:NewSize=384m -XX:MaxNewSize=384m -XX:PermSize=256m > > > > > > -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000 > > > > > -Dsun.rmi.dgc.server.gcInterval=3600000 > > > > > > -Djava.headless.awt=true -Xloggc:gc.log > > > > > > > > > > I'd suggest experimenting with either:- > > > > > > > > > > -XX:+ExplicitGCInvokesConcurrent[AndUnloadsClasses] > > > > > -XX:+CMSClassUnloadingEnabled > > > > > > > > > > or, perhaps less desirable, but certainly useful from the > > > > prespective of > > > > > your > > > > > debugging objectives here:- > > > > > > > > > > -XX:+DisableExplicitGC -XX:+CMSClassUnloadingEnabled > > > > > > > > > > > > > > > > > We see the DGC working every hour - 3600 seconds apart a ParNew > > > > > > followed by a Full GC - and there is a plethora of class > unloading > > > > of > > > > > > the Sun reflection classes since we do a lot of RMI - > > > > > serialisation/deserialisation. > > > > > > > > > > > > Should we increase the frequency of DGC? Not sure why the VM > hangs > > > > - > > > > > > possibly our client code - but we wanted to exclude completely > > the > > > > > > idea that GC is culpable of creating this or contributing to > this > > > > > failure. > > > > > > > > > > Check that you are not paging and running slow rather than > hanging? 
> > > > > > > > > > When you get the "hung jvm", if on Solaris, try prstat -L -p > > > > to see > > > > > if any threads are active, and also try pstack (perhaps > several > > > > > seconds apart, to observe any active threads). If the application > > > shows > > > > > no activity (from above), try jstack (or kill -QUIT ) > > to > > > > > see if you can elicit a java thread stack dump. > > > > > > > > > > (I was not sure from yr description whether you believed the JVM > > was > > > > > hung or that the jvm was responding -- for example doing the > > > occasional > > > > > gc etc -- but the application response had plummeted.) > > > > > > > > > > -- ramki > > > > > > > > > > > > > > > > > thanks > > > > > > > > > > > > keith > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > hotspot-gc-use mailing list > > > > > > hotspot-gc-use at openjdk.java.net > > > > > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > > _______________________________________________ > > > > > hotspot-gc-use mailing list > > > > > hotspot-gc-use at openjdk.java.net > > > > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > > > > > > > > > > > > > > > > > > -- > > > > Michael Finocchiaro > > > > michael.finocchiaro at gmail.com > > > > Mobile Telephone: +33 6 67 90 64 39 > > > > MSN: le_fino at hotmail.com > > > > _______________________________________________ > > > > hotspot-gc-use mailing list > > > > hotspot-gc-use at openjdk.java.net > > > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > > > > > > > > -- > > Michael Finocchiaro > > michael.finocchiaro at gmail.com > > Mobile Telephone: +33 6 67 90 64 39 > > MSN: le_fino at hotmail.com > > _______________________________________________ > > hotspot-gc-use mailing list > > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > -- Michael Finocchiaro michael.finocchiaro at gmail.com Mobile 
Telephone: +33 6 67 90 64 39 MSN: le_fino at hotmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080404/c3ed9a7f/attachment.html From Y.S.Ramakrishna at Sun.COM Fri Apr 4 12:31:07 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Fri, 04 Apr 2008 12:31:07 -0700 Subject: RMI Activity Threads Lock GC o/p In-Reply-To: <8b61e5430804041217q7fee7753pbcfdbf294f7471c0@mail.gmail.com> References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> <8b61e5430804041142s4d15a387pe7b9c3d9270bb147@mail.gmail.com> <8b61e5430804041217q7fee7753pbcfdbf294f7471c0@mail.gmail.com> Message-ID: ... > Back to the CMS vs. ParallelOldGC, is there a technical description anywhere > on the difference between these two Old Generation Parallel > Collectors? I > know that they are tightly associated with their respective Young Generation > Collectors (UseParNewGC and ParallelGC respectively) but beyond that, > I am > not sure I understand what ParallelOldGC does. Unfortunately, there is no parallel compacting collector for CMS in the current product. 
See Jon's blog for a diagram illustrating the way these collectors are structured/connected:- http://blogs.sun.com/jonthecollector/entry/our_collectors -- ramki From Y.S.Ramakrishna at Sun.COM Fri Apr 4 12:40:48 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Fri, 04 Apr 2008 12:40:48 -0700 Subject: Reporting large Java heap allocations (was Re: RMI Activity Threads Lock GC o/p) In-Reply-To: <8b61e5430804041217q7fee7753pbcfdbf294f7471c0@mail.gmail.com> References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> <8b61e5430804041142s4d15a387pe7b9c3d9270bb147@mail.gmail.com> <8b61e5430804041217q7fee7753pbcfdbf294f7471c0@mail.gmail.com> Message-ID: Hi Fino -- > Yes, there is an environment variable on AIX called > ALLOCATION_THRESHOLD= that when set will have the JVM > send on > error to stdout (or stderr not sure) when an allocation greater than in bytes> occurs. This could be useful, well at least for us, when dealing > with code that sends back way too much data. By the heap blowing up, I > mean > a sudden huge jump in memory consumption from which the heap may have > a real > hard time recovering if at all. Even better would be a flag that threw > a > stack trace when that allocation occurs. ... I don't think there is a jvm (or env option) w/Hotspot that would do that (but i may be wrong). However, I also wonder whether (with appropriate caveats, see below, on appropriate platforms) one might be able to elicit (an approximation to) that data (at sufficiently large values of ) by leveraging dtrace; someone on hotspot-gc-runtime at o.j.n might be able to help, so i have cross-posted to that list. 
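[Editorial aside: stock HotSpot has no analogue of AIX's ALLOCATION_THRESHOLD (short of the dtrace route suggested above), but a crude after-the-fact substitute is to scan the -Xloggc output for any single inter-collection jump in occupied heap beyond a chosen limit. A hypothetical sketch — the `big_jumps` helper and its 300000 KB default are illustrative, not a JVM feature, and the sample lines are from the log quoted earlier in this thread:]

```python
import re

ENTRY = re.compile(r"(\d+\.\d+): \[(?:ParNew|Full GC) (\d+)K->(\d+)K")

def big_jumps(lines, threshold_kb=300_000):
    """Flag log entries whose starting occupancy exceeds the previous
    entry's ending occupancy by more than threshold_kb (a coarse proxy
    for 'a huge burst of allocation happened in between')."""
    jumps = []
    prev_after = None
    for line in lines:
        m = ENTRY.search(line)
        if not m:
            continue
        t, before, after = float(m.group(1)), int(m.group(2)), int(m.group(3))
        if prev_after is not None and before - prev_after > threshold_kb:
            jumps.append((t, before - prev_after))
        prev_after = after
    return jumps

log = ["0.613: [Full GC 6985K->6940K(1883840K), 0.7510576 secs]",
       "288.169: [ParNew 399516K->20399K(1883840K), 3.0827681 secs]",
       "3606.470: [ParNew 69808K->49730K(1883840K), 0.0388510 secs]"]
print(big_jumps(log))
```

Unlike ALLOCATION_THRESHOLD this is post-mortem only and cannot distinguish one giant allocation from many small ones within the same interval, which is why a probe-based approach like dtrace would be needed for the single-allocation case.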
-- ramki From Y.S.Ramakrishna at Sun.COM Fri Apr 4 12:43:16 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Fri, 04 Apr 2008 12:43:16 -0700 Subject: Reporting large Java heap allocations (was Re: RMI Activity Threads Lock GC o/p) In-Reply-To: References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> <8b61e5430804041142s4d15a387pe7b9c3d9270bb147@mail.gmail.com> <8b61e5430804041217q7fee7753pbcfdbf294f7471c0@mail.gmail.com> Message-ID: Sorry, corrected the runtime list address below. Apologies for the resulting clutter. -- ramki ----- Original Message ----- From: Y Srinivas Ramakrishna Date: Friday, April 4, 2008 12:40 pm Subject: Reporting large Java heap allocations (was Re: RMI Activity Threads Lock GC o/p) To: Michael Finocchiaro Cc: Keith Holdaway , "hotspot-gc-use at openjdk.java.net" , hotspot-runtime-dev at openjdk.java.net > Hi Fino -- > > > Yes, there is an environment variable on AIX called > > ALLOCATION_THRESHOLD= that when set will have the JVM > > > send on > > error to stdout (or stderr not sure) when an allocation greater than > > in bytes> occurs. This could be useful, well at least for us, when dealing > > with code that sends back way too much data. By the heap blowing up, > I > > mean > > a sudden huge jump in memory consumption from which the heap may > have > > a real > > hard time recovering if at all. Even better would be a flag that > threw > > a > > stack trace when that allocation occurs. > > ... > > I don't think there is a jvm (or env option) w/Hotspot that would do that > (but i may be wrong). > > However, I also wonder whether (with appropriate caveats, see below, > on appropriate > platforms) one might be able to elicit (an approximation to) that data > (at > sufficiently large values of ) by leveraging dtrace; someone > on hotspot-runtime-dev at o.j.n might be able to help, so i have cross-posted > to that list. 
> > -- ramki > From Y.S.Ramakrishna at Sun.COM Fri Apr 4 12:50:39 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Fri, 04 Apr 2008 12:50:39 -0700 Subject: Reporting large Java heap allocations (was Re: RMI Activity Threads Lock GC o/p) In-Reply-To: References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> <8b61e5430804041142s4d15a387pe7b9c3d9270bb147@mail.gmail.com> <8b61e5430804041217q7fee7753pbcfdbf294f7471c0@mail.gmail.com> Message-ID: By the way, i am assuming you mean a single large allocation here, not the total heap occupancy exceeding some specified threshold. (For the latter, recall that the JVM Management & Monitoring JMX API's do allow for some form of reportage when the heap occupancy exceeds a certain threshold, but I am guessing you do not have that in mind here). -- ramki ----- Original Message ----- From: Y Srinivas Ramakrishna Date: Friday, April 4, 2008 12:45 pm Subject: Re: Reporting large Java heap allocations (was Re: RMI Activity Threads Lock GC o/p) To: Michael Finocchiaro Cc: Keith Holdaway , hotspot--runtime-dev at openjdk.java.net, "hotspot-gc-use at openjdk.java.net" > Sorry, corrected the runtime list address below. Apologies for the > resulting clutter. > > -- ramki > > ----- Original Message ----- > From: Y Srinivas Ramakrishna > Date: Friday, April 4, 2008 12:40 pm > Subject: Reporting large Java heap allocations (was Re: RMI Activity > Threads Lock GC o/p) > To: Michael Finocchiaro > Cc: Keith Holdaway , > "hotspot-gc-use at openjdk.java.net" , hotspot-runtime-dev at openjdk.java.net > > > > Hi Fino -- > > > > > Yes, there is an environment variable on AIX called > > > ALLOCATION_THRESHOLD= that when set will have the > JVM > > > > > send on > > > error to stdout (or stderr not sure) when an allocation greater > than > > > > in bytes> occurs.
This could be useful, well at least for us, when > dealing > > > with code that sends back way too much data. By the heap blowing > up, > > I > > > mean > > > a sudden huge jump in memory consumption from which the heap may > > have > > > a real > > > hard time recovering if at all. Even better would be a flag that > > threw > > > a > > > stack trace when that allocation occurs. > > > > ... > > > > I don't think there is a jvm (or env option) w/Hotspot that would do > that > > (but i may be wrong). > > > > However, I also wonder whether (with appropriate caveats, see below, > > > on appropriate > > platforms) one might be able to elicit (an approximation to) that > data > > (at > > sufficiently large values of ) by leveraging dtrace; > someone > > on hotspot-runtime-dev at o.j.n might be able to help, so i have cross-posted > > to that list. > > > > -- ramki > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
From michael.finocchiaro at gmail.com Fri Apr 4 12:57:14 2008 From: michael.finocchiaro at gmail.com (Michael Finocchiaro) Date: Fri, 4 Apr 2008 21:57:14 +0200 Subject: Reporting large Java heap allocations (was Re: RMI Activity Threads Lock GC o/p) In-Reply-To: References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> <8b61e5430804041142s4d15a387pe7b9c3d9270bb147@mail.gmail.com> <8b61e5430804041217q7fee7753pbcfdbf294f7471c0@mail.gmail.com> Message-ID: <8b61e5430804041257s7d1ff355t2d63e0cf521975e1@mail.gmail.com> I am talking about a single allocation exceeding some preset limit (see the URL in the original post as to how IBM/s JVM does it). Fino On Fri, Apr 4, 2008 at 9:50 PM, Y Srinivas Ramakrishna < Y.S.Ramakrishna at sun.com> wrote: > By the way, i am assuming you mean a single large allocation here, > not the total heap occupancy exceeding some specified threahold. > (For the latter, recall that the JVM Management & Monitoring JMX API's do > allow for > some form of reportage when the heap occupancy exceeds > a certain threshold, but I am guessing you do not have that in mind here). > > -- ramki > > ----- Original Message ----- > From: Y Srinivas Ramakrishna > Date: Friday, April 4, 2008 12:45 pm > Subject: Re: Reporting large Java heap allocations (was Re: RMI Activity > Threads Lock GC o/p) > To: Michael Finocchiaro > Cc: Keith Holdaway , > hotspot--runtime-dev at openjdk.java.net, "hotspot-gc-use at openjdk.java.net" < > hotspot-gc-use at openjdk.java.net> > > > > Sorry, corrected the runtime list address below. Apologies for the > > resulting clutter.
> > > > -- ramki > > > > ----- Original Message ----- > > From: Y Srinivas Ramakrishna > > Date: Friday, April 4, 2008 12:40 pm > > Subject: Reporting large Java heap allocations (was Re: RMI Activity > > Threads Lock GC o/p) > > To: Michael Finocchiaro > > Cc: Keith Holdaway , > > "hotspot-gc-use at openjdk.java.net" , > hotspot-runtime-dev at openjdk.java.net > > > > > > > Hi Fino -- > > > > > > > Yes, there is an environment variable on AIX called > > > > ALLOCATION_THRESHOLD= that when set will have the > > JVM > > > > > > > send on > > > > error to stdout (or stderr not sure) when an allocation greater > > than > > > > > > in bytes> occurs. This could be useful, well at least for us, when > > dealing > > > > with code that sends back way too much data. By the heap blowing > > up, > > > I > > > > mean > > > > a sudden huge jump in memory consumption from which the heap may > > > have > > > > a real > > > > hard time recovering if at all. Even better would be a flag that > > > threw > > > > a > > > > stack trace when that allocation occurs. > > > > > > ... > > > > > > I don't think there is a jvm (or env option) w/Hotspot that would do > > that > > > (but i may be wrong). > > > > > > However, I also wonder whether (with appropriate caveats, see below, > > > > > on appropriate > > > platforms) one might be able to elicit (an approximation to) that > > data > > > (at > > > sufficiently large values of ) by leveraging dtrace; > > someone > > > on hotspot-runtime-dev at o.j.n might be able to help, so i have > cross-posted > > > to that list. > > > > > > -- ramki > > > > > _______________________________________________ > > hotspot-gc-use mailing list > > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > -- Michael Finocchiaro michael.finocchiaro at gmail.com Mobile Telephone: +33 6 67 90 64 39 MSN: le_fino at hotmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080404/e07d46eb/attachment.html From Jon.Masamitsu at Sun.COM Fri Apr 4 13:40:15 2008 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Fri, 04 Apr 2008 13:40:15 -0700 Subject: RMI Activity Threads Lock GC o/p In-Reply-To: References: <304E9E55F6A4BE4B910C2437D4D1B49609070F0DF3@MERCMBX14.na.sas.com> <8b61e5430804041102j68505caey656baedefb35b120@mail.gmail.com> <8b61e5430804041142s4d15a387pe7b9c3d9270bb147@mail.gmail.com> <8b61e5430804041217q7fee7753pbcfdbf294f7471c0@mail.gmail.com> Message-ID: <47F6922F.2090002@sun.com> You might also find this one useful. It's a description of the UseParallelOldGC collector. http://blogs.sun.com/jonthecollector/entry/more_of_a_good_thing Y Srinivas Ramakrishna wrote On 04/04/08 12:31,: >... > > > >>Back to the CMS vs. ParallelOldGC, is there a technical description anywhere >>on the difference between these two Old Generation Parallel >>Collectors? I >>know that they are tightly associated with their respective Young Generation >>Collectors (UseParNewGC and ParallelGC respectively) but beyond that, >>I am >>not sure I understand what ParallelOldGC does. >> >> > >Unfortunately, there is no parallel compacting collector for CMS in the current product. 
See >Jon's blog for a diagram illustrating the way these collectors are structured/connected:- > >http://blogs.sun.com/jonthecollector/entry/our_collectors > >-- ramki >_______________________________________________ >hotspot-gc-use mailing list >hotspot-gc-use at openjdk.java.net >http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > From jamesnichols3 at gmail.com Mon Apr 14 11:30:38 2008 From: jamesnichols3 at gmail.com (James Nichols) Date: Mon, 14 Apr 2008 14:30:38 -0400 Subject: System.gc() still resulting in garbage collections with -XX:+DisableExplicitGC Message-ID: <83a51e120804141130j391a3277ma4fe27942261cdee@mail.gmail.com> Hello, I'm running with the following JVM arguments: -server -Xms4096m -Xmx4096m -XX:NewSize=1228M -XX:MaxNewSize=1228M -XX:MaxTenuringThreshold=4 -XX:SurvivorRatio=6 -XX:+ScavengeBeforeFullGC -XX:PermSize=256M -XX:MaxPermSize=256M -XX:-UseConcMarkSweepGC -XX:+UseParNewGC -XX:ParallelGCThreads=3 -XX:+CMSParallelRemarkEnabled -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -verbosegc -XX:+DisableExplicitGC -XX:+PrintTenuringDistribution -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime -XX:+PrintClassHistogram -Xloggc:/var/log/jboss/gc.dat -Dsun.net.client.defaultConnectTimeout=10000 Using Jstat, I see a bunch of System.gc() calls showing up: Timestamp S0 S1 E O P YGC YGCT FGC FGCT GCT LGCC GCC 189869.1 0.00 0.00 1.80 21.36 89.74 3109 15.401 3012 288.886 304.288 System.gc() No GC Any ideas as to why I'm still getting these even though I have -XX:+DisableExplicitGC set? Jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080414/79f99a1d/attachment.html From Y.S.Ramakrishna at Sun.COM Mon Apr 14 12:00:07 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Mon, 14 Apr 2008 12:00:07 -0700 Subject: System.gc() still resulting in garbage collections with -XX:+DisableExplicitGC In-Reply-To: <83a51e120804141130j391a3277ma4fe27942261cdee@mail.gmail.com> References: <83a51e120804141130j391a3277ma4fe27942261cdee@mail.gmail.com> Message-ID: That is weird and unexpected (and could well be instrumentation error). Could you post a snippet of the GC log (/var/log/jboss/gc.dat) showing a portion where such System.gc()'s may be occurring? -- ramki ----- Original Message ----- From: James Nichols Date: Monday, April 14, 2008 11:31 am Subject: System.gc() still resulting in garbage collections with -XX:+DisableExplicitGC To: hotspot-gc-use at openjdk.java.net > Hello, > > I'm running with the following JVM arguments: > > -server -Xms4096m -Xmx4096m -XX:NewSize=1228M -XX:MaxNewSize=1228M > -XX:MaxTenuringThreshold=4 -XX:SurvivorRatio=6 -XX:+ScavengeBeforeFullGC > -XX:PermSize=256M -XX:MaxPermSize=256M -XX:-UseConcMarkSweepGC > -XX:+UseParNewGC -XX:ParallelGCThreads=3 -XX:+CMSParallelRemarkEnabled > -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -verbosegc > -XX:+DisableExplicitGC -XX:+PrintTenuringDistribution -XX:+PrintGCDetails > -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC > -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime > -XX:+PrintClassHistogram -Xloggc:/var/log/jboss/gc.dat > -Dsun.net.client.defaultConnectTimeout=10000 > > Using Jstat, I see a bunch of System.gc() calls showing up: > > Timestamp S0 S1 E O P YGC YGCT FGC > FGCT GCT LGCC GCC > 189869.1 0.00 0.00 1.80 21.36 89.74 3109 15.401 3012 > 288.886 304.288 System.gc() No GC > > > Any ideas as to why I'm still getting these even though I have > -XX:+DisableExplicitGC set? 
> > Jim > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From jamesnichols3 at gmail.com Mon Apr 14 12:34:24 2008 From: jamesnichols3 at gmail.com (James Nichols) Date: Mon, 14 Apr 2008 15:34:24 -0400 Subject: System.gc() still resulting in garbage collections with -XX:+DisableExplicitGC In-Reply-To: References: <83a51e120804141130j391a3277ma4fe27942261cdee@mail.gmail.com> Message-ID: <83a51e120804141234m54e58addj1aadfb97d67880b8@mail.gmail.com> Ok, this was PEBKAC. I recently installed the JBoss ON agent (which is Java) and was running jstat on that, not my JBoss application server. You did help though, since I started looking at my gc.dat log closely and it wasn't jibing with what I was seeing in jstat. Thanks!!! Jim On Mon, Apr 14, 2008 at 3:00 PM, Y Srinivas Ramakrishna < Y.S.Ramakrishna at sun.com> wrote: > That is weird and unexpected (and could well be instrumentation error). > Could you post a > snippet of the GC log (/var/log/jboss/gc.dat) showing a portion where such > System.gc()'s may > be occurring?
> > -- ramki > > ----- Original Message ----- > From: James Nichols > Date: Monday, April 14, 2008 11:31 am > Subject: System.gc() still resulting in garbage collections with > -XX:+DisableExplicitGC > To: hotspot-gc-use at openjdk.java.net > > > > Hello, > > > > I'm running with the following JVM arguments: > > > > -server -Xms4096m -Xmx4096m -XX:NewSize=1228M -XX:MaxNewSize=1228M > > -XX:MaxTenuringThreshold=4 -XX:SurvivorRatio=6 -XX:+ScavengeBeforeFullGC > > -XX:PermSize=256M -XX:MaxPermSize=256M -XX:-UseConcMarkSweepGC > > -XX:+UseParNewGC -XX:ParallelGCThreads=3 -XX:+CMSParallelRemarkEnabled > > -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -verbosegc > > -XX:+DisableExplicitGC -XX:+PrintTenuringDistribution > -XX:+PrintGCDetails > > -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC > > -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime > > -XX:+PrintClassHistogram -Xloggc:/var/log/jboss/gc.dat > > -Dsun.net.client.defaultConnectTimeout=10000 > > > > Using Jstat, I see a bunch of System.gc() calls showing up: > > > > Timestamp S0 S1 E O P YGC YGCT FGC > > FGCT GCT LGCC GCC > > 189869.1 0.00 0.00 1.80 21.36 89.74 3109 15.401 3012 > > 288.886 304.288 System.gc() No GC > > > > > > Any ideas as to why I'm still getting these even though I have > > -XX:+DisableExplicitGC set? > > > > Jim > > _______________________________________________ > > hotspot-gc-use mailing list > > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080414/1d786479/attachment.html From Keith.Holdaway at sas.com Tue Apr 15 12:42:13 2008 From: Keith.Holdaway at sas.com (Keith Holdaway) Date: Tue, 15 Apr 2008 15:42:13 -0400 Subject: Negative durations? 
In-Reply-To: <477D8067.8050802@Sun.COM> References: <9BADD5B8-F9DA-4656-843B-7D44FF36963A@mugfu.com> <477C09ED.9020400@Sun.COM> <22133221-306B-41D3-AE57-155876104354@mugfu.com> <477C1718.70004@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49608FC799162@MERCMBX14.na.sas.com> <5d649bdb0801031600j7d36c5e9k4049421726346cf3@mail.gmail.com> <477D8067.8050802@Sun.COM> Message-ID: <304E9E55F6A4BE4B910C2437D4D1B4960908DE7BC1@MERCMBX14.na.sas.com> "Error occurred during initialization of VM Could not reserve enough space for object heap" - on a 4 GB RAM 32 bit Windows box running a 32 bit VM with all -Xmx over 768m, I see the previous message. The obvious causes: not enough contiguous memory to accommodate the heap. Fragmentation issues? We rebooted - no other stuff running - what other causes are there to produce such a VM start up issue? thanks keith Keith R Holdaway Java Development Technologies SAS The Power to Know Carpe Diem From Jon.Masamitsu at Sun.COM Tue Apr 15 15:01:40 2008 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Tue, 15 Apr 2008 15:01:40 -0700 Subject: Negative durations? In-Reply-To: <304E9E55F6A4BE4B910C2437D4D1B4960908DE7BC1@MERCMBX14.na.sas.com> References: <9BADD5B8-F9DA-4656-843B-7D44FF36963A@mugfu.com> <477C09ED.9020400@Sun.COM> <22133221-306B-41D3-AE57-155876104354@mugfu.com> <477C1718.70004@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49608FC799162@MERCMBX14.na.sas.com> <5d649bdb0801031600j7d36c5e9k4049421726346cf3@mail.gmail.com> <477D8067.8050802@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B4960908DE7BC1@MERCMBX14.na.sas.com> Message-ID: <480525C4.8000403@Sun.COM> Keith, Which release are you using? Keith Holdaway wrote: > "Error occurred during initialization of VM Could not reserve enough space for object heap" - on a 4 GB RAM 32 bit Windows box running a 32 bit VM with all -Xmx over 768m, I see the previous message. The obvious causes: not enough contiguous memory to accommodate the heap. Fragmentation issues? 
We rebooted - no other stuff running - what other causes are there to produce such a VM start up issue? > > thanks > > keith > > Keith R Holdaway > Java Development Technologies > > SAS The Power to Know > > Carpe Diem > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From Y.S.Ramakrishna at Sun.COM Tue Apr 15 15:26:34 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Tue, 15 Apr 2008 15:26:34 -0700 Subject: Negative durations? In-Reply-To: <304E9E55F6A4BE4B910C2437D4D1B4960908DE7BC1@MERCMBX14.na.sas.com> References: <9BADD5B8-F9DA-4656-843B-7D44FF36963A@mugfu.com> <477C09ED.9020400@Sun.COM> <22133221-306B-41D3-AE57-155876104354@mugfu.com> <477C1718.70004@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49608FC799162@MERCMBX14.na.sas.com> <5d649bdb0801031600j7d36c5e9k4049421726346cf3@mail.gmail.com> <477D8067.8050802@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B4960908DE7BC1@MERCMBX14.na.sas.com> Message-ID: > "Error occurred during initialization of VM Could not reserve enough > space for object heap" - on a 4 GB RAM 32 bit Windows box running a 32 > bit VM with all -Xmx over 768m, I see the previous message. The On a Solaris box I might have asked you to truss the run and see the mmap fail and look at the error code (perhaps we should expose that error code back to the user above). I am guessing Windows may have similar functionality (or others on this list might weigh in with suggestions). I might also have asked for -XX:OnError="pmap %p" (although I am not sure it works so early in the life of a JVM), if Windows supports something like pmap (it probably does, except I do not know what it is called). > obvious causes: not enough contiguous memory to accommodate the heap. Fragmentation issues? Could be.
Although you are asking for only (a bit over) 768 M, which is very modest (out of a total virtual address space of 4 GB for the 32 bit VM; OK, maybe only 2 GB on 32-bit Windows, but I am guessing you are running 64-bit Windows and so have the entire 4 GB in a 32-bit process). Are you starting the JVM through the JNI invocation interface from within a native process, or is your application pure Java and you are using the Java command-line launcher? > We rebooted - no other stuff running - what > other causes are there to produce such a VM start up issue? The only other cause, that something else was running and taking up physical memory or swap despite the 4 GB RAM, is one you already ruled out via the fresh reboot and the check that nothing else was running. Perhaps someone else on this alias has encountered the problem, is more familiar with Windows and may be able to offer suggestions... -- ramki From Keith.Holdaway at sas.com Tue Apr 15 15:28:09 2008 From: Keith.Holdaway at sas.com (Keith Holdaway) Date: Tue, 15 Apr 2008 18:28:09 -0400 Subject: Negative durations? In-Reply-To: <480525C4.8000403@Sun.COM> References: <9BADD5B8-F9DA-4656-843B-7D44FF36963A@mugfu.com> <477C09ED.9020400@Sun.COM> <22133221-306B-41D3-AE57-155876104354@mugfu.com> <477C1718.70004@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49608FC799162@MERCMBX14.na.sas.com> <5d649bdb0801031600j7d36c5e9k4049421726346cf3@mail.gmail.com> <477D8067.8050802@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B4960908DE7BC1@MERCMBX14.na.sas.com>, <480525C4.8000403@Sun.COM> Message-ID: <304E9E55F6A4BE4B910C2437D4D1B49609070F0E14@MERCMBX14.na.sas.com> Apologies for the incorrect Subject line - although ironically, I am seeing some negative times in the verbosegc log.
The JDK 5.0 u15 64 bit VM ________________________________________ From: Jon.Masamitsu at Sun.COM [Jon.Masamitsu at Sun.COM] Sent: Tuesday, April 15, 2008 6:01 PM To: Keith Holdaway Cc: Y.S.Ramakrishna at Sun.COM; hotspot-gc-use at openjdk.java.net Subject: Re: Negative durations? Keith, Which release are you using? Keith Holdaway wrote: > "Error occurred during initialization of VM Could not reserve enough space for object heap" - on a 4 GB RAM 32 bit Windows box running a 32 bit VM with all -Xmx over 768m, I see the previous message. The obvious causes: not enough contiguous memory to accommodate the heap. Fragmentation issues? We rebooted - no other stuff running - what other causes are there to produce such a VM start up issue? > > thanks > > keith > > Keith R Holdaway > Java Development Technologies > > SAS The Power to Know > > Carpe Diem > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From Keith.Holdaway at sas.com Tue Apr 15 15:29:27 2008 From: Keith.Holdaway at sas.com (Keith Holdaway) Date: Tue, 15 Apr 2008 18:29:27 -0400 Subject: Negative durations? In-Reply-To: <480525C4.8000403@Sun.COM> References: <9BADD5B8-F9DA-4656-843B-7D44FF36963A@mugfu.com> <477C09ED.9020400@Sun.COM> <22133221-306B-41D3-AE57-155876104354@mugfu.com> <477C1718.70004@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49608FC799162@MERCMBX14.na.sas.com> <5d649bdb0801031600j7d36c5e9k4049421726346cf3@mail.gmail.com> <477D8067.8050802@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B4960908DE7BC1@MERCMBX14.na.sas.com>, <480525C4.8000403@Sun.COM> Message-ID: <304E9E55F6A4BE4B910C2437D4D1B49609070F0E15@MERCMBX14.na.sas.com> Sorry - not 64 bit - that's another issue - JDK 5.0 u14 on the VM start up issues. 
________________________________________ From: Jon.Masamitsu at Sun.COM [Jon.Masamitsu at Sun.COM] Sent: Tuesday, April 15, 2008 6:01 PM To: Keith Holdaway Cc: Y.S.Ramakrishna at Sun.COM; hotspot-gc-use at openjdk.java.net Subject: Re: Negative durations? Keith, Which release are you using? Keith Holdaway wrote: > "Error occurred during initialization of VM Could not reserve enough space for object heap" - on a 4 GB RAM 32 bit Windows box running a 32 bit VM with all -Xmx over 768m, I see the previous message. The obvious causes: not enough contiguous memory to accommodate the heap. Fragmentation issues? We rebooted - no other stuff running - what other causes are there to produce such a VM start up issue? > > thanks > > keith > > Keith R Holdaway > Java Development Technologies > > SAS The Power to Know > > Carpe Diem > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From Chris.Phillips at Sun.COM Tue Apr 15 16:13:31 2008 From: Chris.Phillips at Sun.COM (Chris Phillips) Date: Tue, 15 Apr 2008 19:13:31 -0400 Subject: Negative durations? In-Reply-To: <304E9E55F6A4BE4B910C2437D4D1B49609070F0E15@MERCMBX14.na.sas.com> References: <9BADD5B8-F9DA-4656-843B-7D44FF36963A@mugfu.com> <477C09ED.9020400@Sun.COM> <22133221-306B-41D3-AE57-155876104354@mugfu.com> <477C1718.70004@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49608FC799162@MERCMBX14.na.sas.com> <5d649bdb0801031600j7d36c5e9k4049421726346cf3@mail.gmail.com> <477D8067.8050802@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B4960908DE7BC1@MERCMBX14.na.sas.com> <480525C4.8000403@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49609070F0E15@MERCMBX14.na.sas.com> Message-ID: <4805369B.7040402@Sun.Com> Hi Could it perhaps be a dll relocation issue... that is fracturing the contiguous space needed for the larger Java heap? 
Chris PS I've seen similar issues when 3rd party dlls that were pre-relocated got loaded before the jvm starts.. . Keith Holdaway wrote: > Sorry - not 64 bit - that's another issue - > > JDK 5.0 u14 on the VM start up issues. > ________________________________________ > From: Jon.Masamitsu at Sun.COM [Jon.Masamitsu at Sun.COM] > Sent: Tuesday, April 15, 2008 6:01 PM > To: Keith Holdaway > Cc: Y.S.Ramakrishna at Sun.COM; hotspot-gc-use at openjdk.java.net > Subject: Re: Negative durations? > > Keith, > > Which release are you using? > > > > Keith Holdaway wrote: >> "Error occurred during initialization of VM Could not reserve enough space for object heap" - on a 4 GB RAM 32 bit Windows box running a 32 bit VM with all -Xmx over 768m, I see the previous message. The obvious causes: not enough contiguous memory to accommodate the heap. Fragmentation issues? We rebooted - no other stuff running - what other causes are there to produce such a VM start up issue? >> >> thanks >> >> keith >> >> Keith R Holdaway >> Java Development Technologies >> >> SAS The Power to Know >> >> Carpe Diem >> >> >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use -- -- Woda: "write once, debug anywhere" Hong Zhang http://thehenrys.ca | Chris Phillips - Sun Java Sustaining JVM Engineer, | | mailto:Chris.Phillips at Sun.Com (781)442-0046/x20046 | | http://dpweb.sfbay/~chrisphi page:one-877-two six three-2117 | "EPIC stands for Expects Perfectly Intuitive Compilers" P. Bannon http://www.hazmatmodine.com NOTICE: This email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. 
Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. "blah blah blah - Ginger!" -- From Chris.Phillips at Sun.COM Tue Apr 15 16:28:21 2008 From: Chris.Phillips at Sun.COM (Chris Phillips) Date: Tue, 15 Apr 2008 19:28:21 -0400 Subject: Negative durations? In-Reply-To: <4805369B.7040402@Sun.Com> References: <9BADD5B8-F9DA-4656-843B-7D44FF36963A@mugfu.com> <477C09ED.9020400@Sun.COM> <22133221-306B-41D3-AE57-155876104354@mugfu.com> <477C1718.70004@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49608FC799162@MERCMBX14.na.sas.com> <5d649bdb0801031600j7d36c5e9k4049421726346cf3@mail.gmail.com> <477D8067.8050802@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B4960908DE7BC1@MERCMBX14.na.sas.com> <480525C4.8000403@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49609070F0E15@MERCMBX14.na.sas.com> <4805369B.7040402@Sun.Com> Message-ID: <48053A15.2020804@Sun.Com> Hi Sorry to reply to self, but since I'm not a Windows hombre, "pre-relocated" I think equals "rebase" in Windows-ese. Also there is a utility (used to be at sysinternals.com, now somewhere at Microsoft), listdll(s?).exe, that will show info about this. Chris Chris Phillips wrote: > Hi > > Could it perhaps be a dll relocation issue... that is fracturing the > contiguous space needed for the larger Java heap? > > Chris > PS > I've seen similar issues when 3rd party dlls that were pre-relocated > got loaded before the jvm starts.. > > . > Keith Holdaway wrote: >> Sorry - not 64 bit - that's another issue - >> >> JDK 5.0 u14 on the VM start up issues. >> ________________________________________ >> From: Jon.Masamitsu at Sun.COM [Jon.Masamitsu at Sun.COM] >> Sent: Tuesday, April 15, 2008 6:01 PM >> To: Keith Holdaway >> Cc: Y.S.Ramakrishna at Sun.COM; hotspot-gc-use at openjdk.java.net >> Subject: Re: Negative durations? >> >> Keith, >> >> Which release are you using?
>> >> >> Keith Holdaway wrote: >>> "Error occurred during initialization of VM Could not reserve enough >>> space for object heap" - on a 4 GB RAM 32 bit Windows box running a >>> 32 bit VM with all -Xmx over 768m, I see the previous message. The >>> obvious causes: not enough contiguous memory to accommodate the heap. >>> Fragmentation issues? We rebooted - no other stuff running - what >>> other causes are there to produce such a VM start up issue? >>> >>> thanks >>> >>> keith >>> >>> Keith R Holdaway >>> Java Development Technologies >>> >>> SAS The Power to Know >>> >>> Carpe Diem >>> >>> >>> _______________________________________________ >>> hotspot-gc-use mailing list >>> hotspot-gc-use at openjdk.java.net >>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > >
-- From doug.jones at EDS.COM Thu Apr 17 20:01:35 2008 From: doug.jones at EDS.COM (Jones, Doug H) Date: Fri, 18 Apr 2008 15:01:35 +1200 Subject: CMS GC tuning under JVM 5.0 Message-ID: <027FCB5D4C65CC4CA714042A4EE8CC6B041974C3@nzprm231.apac.corp.eds.com> Hi, We have considerable experience with JVM tuning of our SunOne appservers, utilizing CMS GC and adjusting NewSize/heapsize values etc under JVM version 1.4.2 to get some very low GC pause times. We are currently moving to manage an application running under JVM 5.0. This is still in Test, but our initial GC monitoring is raising some questions. The application has some normal activity but hourly it needs to do some significant background processing. This drives memory use much higher than normal. So what we see is ParNew collections going from approximately one every 5 minutes to just a second or two apart. That is of course no problem in itself. But what we also see is an occasional concurrent-mode failure, followed by a relatively long single-thread STW collection. While we haven't exactly done the correlation, we're assuming that the hourly burst in activity has coincided with the tenured area being close to full (so the CMS GC has not been able to complete before the free space available has become less than NewSize). The two examples below are exactly 19 hours apart.
Example 1: 31123.434: [GC 31123.434: [ParNew: 24448K->0K(24512K), 0.0294222 secs] 292487K->269028K(327616K), 0.0296926 secs] 31468.449: [GC 31468.449: [ParNew: 24448K->0K(24512K), 0.0228006 secs] 293476K->269851K(327616K), 0.0230994 secs] 31678.918: [GC 31678.919: [ParNew: 24447K->0K(24512K), 0.0950828 secs] 294299K->278163K(327616K), 0.0954078 secs] 31679.235: [GC 31679.235: [ParNew: 24391K->0K(24512K), 0.2853518 secs] 302554K->298349K(327616K), 0.2856694 secs] 31679.536: [GC [1 CMS-initial-mark: 298349K(303104K)] 298442K(327616K), 0.0033056 secs] 31679.540: [CMS-concurrent-mark-start] 31680.017: [CMS-concurrent-mark: 0.477/0.477 secs] 31680.017: [CMS-concurrent-preclean-start] 31680.023: [CMS-concurrent-preclean: 0.006/0.006 secs] 31680.023: [CMS-concurrent-abortable-preclean-start] 31768.429: [GC 31768.429: [ParNew: 24448K->24448K(24512K), 0.0000510 secs]31768.430: [CMS31768.430: [CMS-concurrent-abortable-preclean: 5.410/88.406 secs] (concurrent mode failure): 298349K->35861K(303104K), 0.9340408 secs] 322797K->35861K(327616K), 0.9345904 secs] Example 2: 100064.686: [GC 100064.686: [ParNew: 24448K->0K(24512K), 0.0155892 secs] 293020K->268870K(327616K), 0.0160228 secs] 100079.843: [GC 100079.843: [ParNew: 24382K->0K(24512K), 0.0390096 secs] 293253K->291786K(327616K), 0.0393622 secs] 100079.883: [GC [1 CMS-initial-mark: 291786K(303104K)] 291881K(327616K), 0.0028736 secs] 100079.887: [CMS-concurrent-mark-start] 100080.381: [CMS-concurrent-mark: 0.494/0.494 secs] 100080.381: [CMS-concurrent-preclean-start] 100080.390: [CMS-concurrent-preclean: 0.009/0.009 secs] 100080.390: [CMS-concurrent-abortable-preclean-start] 100259.694: [GC 100259.694: [ParNew: 24448K->24448K(24512K), 0.0000456 secs]100259.694: [CMS100259.695: [CMS-concurrent-abortable-preclean: 10.649/179.305 sec s] (concurrent mode failure): 291786K->43120K(303104K), 1.0073652 secs] 316234K->43120K(327616K), 1.0078356 secs] This is not a problem to us in Test, but if we extrapolate to Production with a 
proposed heap size of 1.5GB, this may become more of an issue. Under JVM 1.4.2 we can circumvent the default Collector behaviour by adding the flag "-XX:+UseCMSInitiatingOccupancyOnly=true" to ensure CMS GC's occur at approximately the CMSInitiatingOccupancyFraction percent full. But this flag does not appear to be available under JVM 5.0. So we have two questions: 1) Is there an equivalent option for JVM 5.0 which will force CMS Collections to occur with a reasonably large amount of free space left in tenured (ie relative to NewSize), and 2) Could you interpret the pair of times on the CMS-concurrent-abortable-preclean step, in particular the large time (88 and 179secs in the above examples). Thanks, Doug. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20080418/962a421f/attachment.html From Thomas.Viessmann at Sun.COM Fri Apr 18 00:24:03 2008 From: Thomas.Viessmann at Sun.COM (Thomas Viessmann) Date: Fri, 18 Apr 2008 09:24:03 +0200 Subject: CMS GC tuning under JVM 5.0 In-Reply-To: <027FCB5D4C65CC4CA714042A4EE8CC6B041974C3@nzprm231.apac.corp.eds.com> References: <027FCB5D4C65CC4CA714042A4EE8CC6B041974C3@nzprm231.apac.corp.eds.com> Message-ID: <48084C93.90606@sun.com> Hi Doug, on the flag -XX:+UseCMSInitiatingOccupancyOnly=true, you simply got the syntax wrong. This should be either -XX:+UseCMSInitiatingOccupancyOnly //True or -XX:-UseCMSInitiatingOccupancyOnly //False To my surprise, the wrong syntax worked in 1.4.2. 
This obviously got fixed in 5.0 and above: $ /usr/jdk/java1.4.2/bin/java -XX:+UseCMSInitiatingOccupancyOnly=true -version java version "1.4.2_17" Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_17-b06) Java HotSpot(TM) Client VM (build 1.4.2_17-b06, mixed mode) $ /usr/jdk/java1.4.2/bin/java -XX:+UseCMSInitiatingOccupancyOnly -version java version "1.4.2_17" Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_17-b06) Java HotSpot(TM) Client VM (build 1.4.2_17-b06, mixed mode) $ /usr/jdk/java5/bin/java -XX:+UseCMSInitiatingOccupancyOnly=true -version Unrecognized VM option '+UseCMSInitiatingOccupancyOnly=true' Could not create the Java virtual machine. $ /usr/jdk/java5/bin/java -XX:+UseCMSInitiatingOccupancyOnly -version java version "1.5.0_15" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_15-b04) Java HotSpot(TM) Server VM (build 1.5.0_15-b04, mixed mode) $ /usr/jdk/java6/bin/java -XX:+UseCMSInitiatingOccupancyOnly=true -version Unrecognized VM option '+UseCMSInitiatingOccupancyOnly=true' Could not create the Java virtual machine. $ /usr/jdk/java6/bin/java -XX:+UseCMSInitiatingOccupancyOnly -version java version "1.6.0_05" Java(TM) SE Runtime Environment (build 1.6.0_05-b13) Java HotSpot(TM) Server VM (build 10.0-b19, mixed mode) Regarding the long preclean times: These are concurrent times and not Stop-the-world. Sorry I do not know their exact meaning. I'm sure someone else will answer this one. Thomas Jones, Doug H wrote: > > Hi, > > We have considerable experience with JVM tuning of our SunOne > appservers, utilizing CMS GC and adjusting NewSize/heapsize values etc > under JVM version 1.4.2 to get some very low GC pause times. > > We are currently moving to manage an application running under JVM > 5.0. This is still in Test, but our initial GC monitoring is raising > some questions. > > The application has some normal activity but hourly it needs to do > some significant background processing. 
This drives memory use much > higher than normal. So what we see is ParNew's going from > approximately one every 5 minutes to just a second or two apart. That > is of course no problem in itself. But what we are also see is an > occasional concurrent-mode failure, followed by a relatively long > single-thread STW collection. While we haven't exactly done the > correlation we're assuming that the hourly burst in activity has > coincided with the tenured area being close to full (so the CMS GC has > not been able to complete before free space available has become less > than NewSize). The two examples below are exactly 19 hours apart. > > Example 1: > > 31123.434: [GC 31123.434: [ParNew: 24448K->0K(24512K), 0.0294222 secs] > 292487K->269028K(327616K), 0.0296926 secs] > 31468.449: [GC 31468.449: [ParNew: 24448K->0K(24512K), 0.0228006 secs] > 293476K->269851K(327616K), 0.0230994 secs] > 31678.918: [GC 31678.919: [ParNew: 24447K->0K(24512K), 0.0950828 secs] > 294299K->278163K(327616K), 0.0954078 secs] > 31679.235: [GC 31679.235: [ParNew: 24391K->0K(24512K), 0.2853518 secs] > 302554K->298349K(327616K), 0.2856694 secs] > 31679.536: [GC [1 CMS-initial-mark: 298349K(303104K)] > 298442K(327616K), 0.0033056 secs] > 31679.540: [CMS-concurrent-mark-start] > 31680.017: [CMS-concurrent-mark: 0.477/0.477 secs] > 31680.017: [CMS-concurrent-preclean-start] > 31680.023: [CMS-concurrent-preclean: 0.006/0.006 secs] > 31680.023: [CMS-concurrent-abortable-preclean-start] > 31768.429: [GC 31768.429: [ParNew: 24448K->24448K(24512K), 0.0000510 > secs]31768.430: [CMS31768.430: [CMS-concurrent-abortable-preclean: > 5.410/88.406 secs] > > (concurrent mode failure): 298349K->35861K(303104K), 0.9340408 secs] > 322797K->35861K(327616K), 0.9345904 secs] > > > Example 2: > > 100064.686: [GC 100064.686: [ParNew: 24448K->0K(24512K), 0.0155892 > secs] 293020K->268870K(327616K), 0.0160228 secs] > 100079.843: [GC 100079.843: [ParNew: 24382K->0K(24512K), 0.0390096 > secs] 293253K->291786K(327616K), 
0.0393622 secs] > 100079.883: [GC [1 CMS-initial-mark: 291786K(303104K)] > 291881K(327616K), 0.0028736 secs] > 100079.887: [CMS-concurrent-mark-start] > 100080.381: [CMS-concurrent-mark: 0.494/0.494 secs] > 100080.381: [CMS-concurrent-preclean-start] > 100080.390: [CMS-concurrent-preclean: 0.009/0.009 secs] > 100080.390: [CMS-concurrent-abortable-preclean-start] > 100259.694: [GC 100259.694: [ParNew: 24448K->24448K(24512K), 0.0000456 > secs]100259.694: [CMS100259.695: [CMS-concurrent-abortable-preclean: > 10.649/179.305 sec > > s] > (concurrent mode failure): 291786K->43120K(303104K), 1.0073652 secs] > 316234K->43120K(327616K), 1.0078356 secs] > > This is not a problem to us in Test, but if we extrapolate to > Production with a proposed heap size of 1.5GB, this may become more of > an issue. > > Under JVM 1.4.2 we can circumvent the default Collector behaviour by > adding the flag "-XX:+UseCMSInitiatingOccupancyOnly=true" to ensure > CMS GC's occur at approximately the CMSInitiatingOccupancyFraction > percent full. But this flag does not appear to be available under JVM 5.0. > > So we have two questions: > > 1) Is there an equivalent option for JVM 5.0 which will force CMS > Collections to occur with a reasonably large amount of free space left > in tenured (ie relative to NewSize), and > > > 2) Could you interpret the pair of times on the > CMS-concurrent-abortable-preclean step, in particular the large time > (88 and 179secs in the above examples). > > > Thanks, > Doug. 
> > > > > ------------------------------------------------------------------------ > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > -- --- mit freundlichen Gruessen / with kind regards Thomas Viessmann Global Sales and Services - Software Support Engineering Sun Microsystems GmbH Phone: +49 (0)89 46008 2365 / x62365 Sonnenallee 1 Mobile: +49 (0)174 300 5467 D-85551 Kirchheim-Heimstetten Pager: Thomas.Viessmann at sun.itechtool.com Germany/Deutschland mailto: Thomas.Viessmann at sun.com http://www.sun.de Amtsgericht Muenchen: HRB 161028 Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer Vorsitzender des Aufsichtsrates: Martin Haering From Jon.Masamitsu at Sun.COM Fri Apr 18 08:08:04 2008 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Fri, 18 Apr 2008 08:08:04 -0700 Subject: CMS GC tuning under JVM 5.0 In-Reply-To: <027FCB5D4C65CC4CA714042A4EE8CC6B041974C3@nzprm231.apac.corp.eds.com> References: <027FCB5D4C65CC4CA714042A4EE8CC6B041974C3@nzprm231.apac.corp.eds.com> Message-ID: <4808B954.4060603@sun.com> Jones, Doug H wrote On 04/17/08 20:01,: > > ... > > > 2) Could you interpret the pair of times on the > CMS-concurrent-abortable-preclean step, in particular the large time > (88 and 179secs in the above examples). > > The times are wall clock times. Part of the marking of live objects are done while the application is running and changing objects. We monitor the objects that are changed and after the concurrent marking we go over those objects to check for changes to the liveness of objects. This latter check is referred to as precleaning. It's done concurrently in two parts. The precleaing phase runs for a period which depends on factors such as the number of objects that we find that need precleaning and whether the applications threads are changing objects faster then we are precleaning them. 
After the precleaning phase there is the abortable precleaning phase. It does the same type of precleaning but it only runs until we've decided that we want to start the remark phase (which is the second CMS stop-the-world pause). We schedule the remark phase to be between young generation (ParNew in your case) collections. The abortable precleaning is done between the precleaning and the remark phases to do additional precleaning without delaying the remark phase (i.e., we can abort it in order to start the remark phase). Hope that helps. Now I notice that you are getting concurrent mode failures. What release of jdk 5 are you using? The length of the abortable precleaning phase appears to be too long. I think there was a bug related to abortable precleaning and infrequent young generation collections. You can turn abortable precleaning off with -XX:CMSMaxAbortablePrecleanTime=0. From Y.S.Ramakrishna at Sun.COM Fri Apr 18 19:15:10 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Fri, 18 Apr 2008 19:15:10 -0700 Subject: CMS GC tuning under JVM 5.0 In-Reply-To: <48084C93.90606@sun.com> References: <027FCB5D4C65CC4CA714042A4EE8CC6B041974C3@nzprm231.apac.corp.eds.com> <48084C93.90606@sun.com> Message-ID: > > 31680.017: [CMS-concurrent-mark: 0.477/0.477 secs] > > 31680.023: [CMS-concurrent-preclean: 0.006/0.006 secs] > > secs]31768.430: [CMS31768.430: [CMS-concurrent-abortable-preclean: > > 5.410/88.406 secs] > > 100080.381: [CMS-concurrent-mark: 0.494/0.494 secs] > > 100080.390: [CMS-concurrent-preclean: 0.009/0.009 secs] > > secs]100259.694: [CMS100259.695: [CMS-concurrent-abortable-preclean: > > > 10.649/179.305 sec > > > > s] Just to add a bit to what Jon said, the notation [CMS-concurrent-xxx: yyy/zzz secs] indicates that the CMS concurrent xxx-phase took roughly zzz secs of wall-clock elapsed time, of which yyy secs of accumulated elapsed time was spent by the CMS thread doing that work; the remainder of the time, zzz - yyy secs, the thread 
was either sleeping or was stalled for a lock. Note that yyy secs is a rough upper bound of the lwp-virtual time, but the thread need not necessarily have been on-proc all of that time. So in particular, in the last display, of the 179 secs of elapsed time the CMS thread was actually sleeping for 169 secs. Jon's conjecture of the bug about CMSMaxAbortablePrecleanTime seems to be the most likely explanation. (There was a confusion with the interpretation of the units, treating an intended ms spec as a seconds spec, the documented workaround is to set the value to a value in seconds not exceeding about 5: -XX:CMSMaxAbortablePrecleanTime=5 -- or to zero to just elide that phase.) I believe this has been fixed in some later version of JDK 5uXX, but am unable to check the bug id at the moment. -- ramki From Y.S.Ramakrishna at Sun.COM Fri Apr 18 19:24:09 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Fri, 18 Apr 2008 19:24:09 -0700 Subject: CMS GC tuning under JVM 5.0 In-Reply-To: References: <027FCB5D4C65CC4CA714042A4EE8CC6B041974C3@nzprm231.apac.corp.eds.com> <48084C93.90606@sun.com> Message-ID: Here you go:- http://bugs.sun.com/view_bug.do?bug_id=6538910 It shows up as fixed in 5u16b01 according to the bug report. Also documented is the workaround. 
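[Editorial aside for readers post-processing these logs: the sleep/stall time in the yyy/zzz notation is simply zzz - yyy, which can be extracted mechanically. A minimal sketch follows; the class and method names are illustrative and not part of any JDK tool.]

```java
// Sketch: extract the yyy/zzz pair from a CMS concurrent-phase log line
// and compute how long the CMS thread was sleeping or stalled (zzz - yyy).
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CmsPhaseTimes {
    // Matches e.g. "[CMS-concurrent-abortable-preclean: 10.649/179.305 secs]"
    private static final Pattern PHASE =
        Pattern.compile("\\[CMS-concurrent-([\\w-]+): ([0-9.]+)/([0-9.]+) secs?\\]");

    /** Returns zzz - yyy: wall-clock seconds the CMS thread was not doing this phase's work. */
    public static double idleSeconds(String logLine) {
        Matcher m = PHASE.matcher(logLine);
        if (!m.find()) {
            throw new IllegalArgumentException("no CMS concurrent phase in: " + logLine);
        }
        double cpu  = Double.parseDouble(m.group(2)); // yyy: time spent by the CMS thread
        double wall = Double.parseDouble(m.group(3)); // zzz: wall-clock elapsed time
        return wall - cpu;
    }

    public static void main(String[] args) {
        String line = "100259.695: [CMS-concurrent-abortable-preclean: 10.649/179.305 secs]";
        System.out.printf("CMS thread idle/stalled for %.3f secs%n", idleSeconds(line));
    }
}
```

For the abortable-preclean line quoted earlier this reports roughly 168.7 secs, consistent with the "sleeping for 169 secs" figure.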
If you have issues with long remark pauses when you use the workaround, refer to CR http://bugs.sun.com/view_bug.do?bug_id=6572569 -- ramki ----- Original Message ----- From: Y Srinivas Ramakrishna Date: Friday, April 18, 2008 7:15 pm Subject: Re: CMS GC tuning under JVM 5.0 To: Thomas Viessmann Cc: "Jones, Doug H" , hotspot-gc-use at openjdk.java.net > > > 31680.017: [CMS-concurrent-mark: 0.477/0.477 secs] > > > > 31680.023: [CMS-concurrent-preclean: 0.006/0.006 secs] > > > > secs]31768.430: [CMS31768.430: [CMS-concurrent-abortable-preclean: > > > > 5.410/88.406 secs] > > > > 100080.381: [CMS-concurrent-mark: 0.494/0.494 secs] > > > > 100080.390: [CMS-concurrent-preclean: 0.009/0.009 secs] > > > > secs]100259.694: [CMS100259.695: > [CMS-concurrent-abortable-preclean: > > > > > 10.649/179.305 sec > > > > > > s] > > Just to add a bit to what Jon said, the notation [CMS-concurrent-xxx: > yyy/zzz secs] > indicates that the CMS concurrent xxx-phase took roughly zzz secs of wall-clock > elapsed time, of which yyy secs of accumulated elapsed time was spent > by the > CMS thread doing that work; the remainder of the time, zzz - yyy secs, > the > thread was either sleeping or was stalled for a lock. Note that yyy > secs is > a rough upper bound of the lwp-virtual time, but the thread need not necessarily > have been on-proc all of that time. > > So in particular, in the last display, of the 179 secs of elapsed time > the CMS thread > was actually sleeping for 169 secs. Jon's conjecture of the bug about > CMSMaxAbortablePrecleanTime > seems to be the most likely explanation. (There was a confusion with > the interpretation > of the units, treating an intended ms spec as a seconds spec, the documented > workaround is to set the value to a value in seconds not exceeding > about 5: > -XX:CMSMaxAbortablePrecleanTime=5 -- or to zero to just elide that phase.) > > I believe this has been fixed in some later version of JDK 5uXX, but > am unable to > check the bug id at the moment. 
> > -- ramki > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From doug.jones at eds.com Sat Apr 19 15:46:12 2008 From: doug.jones at eds.com (Jones, Doug H) Date: Sun, 20 Apr 2008 10:46:12 +1200 Subject: CMS GC tuning under JVM 5.0 References: <027FCB5D4C65CC4CA714042A4EE8CC6B041974C3@nzprm231.apac.corp.eds.com><48084C93.90606@sun.com> Message-ID: <027FCB5D4C65CC4CA714042A4EE8CC6B026739E1@nzprm231.apac.corp.eds.com> Hi Ramki, and Jon, Thank you for the background information - indeed the bug would appear to explain exactly what we are seeing in terms of long times for the concurrent-abortable-preclean phase. The application is using 5u08 so it would be present. So just to summarize for anybody else's general information if they happen to come across an application which has similar characteristics (ie has normal steady-state processing, but also has periodic spurts of very much greater activity, which the Collector is not easily able to adapt or respond to), under a JDK 5.0 version prior to u16, what we will try is: 1) Set CMSInitiatingOccupancyFraction to a value so that the tenured area contains at least twice NewSize when it reaches this percent full, AND set +UseCMSInitiatingOccupancyOnly so that the Collector honours this value (ie disabling its default 'JIT plus a bit to be safe' behaviour) 2) Specifically set CMSMaxAbortablePrecleanTime=5 so the concurrent-abortable-preclean phase max time is correctly interpreted under this version as 5 seconds (the 6.0 default value), and 3) As suggested in the second bug mentioned by Ramki, we may also set +CMSScavengeBeforeRemark. 
This flag is I believe not on by default in any current JDK version, but in our case with a relatively small NewSize, a ParNew collection takes max something like 0.02sec (and that's when at its full 24Mb) so should add virtually nothing to the time the app is stopped. Actually from looking at other CMS Collections in the same log, just 1) and 3) on their own may fix the problem because we see several examples where the CMS GC has kicked in with eden containing less than 1Mb and the abortable-preclean phase is omitted altogether. However this is specific to the characteristics of our application. I have no doubt that there are other situations where setting CMSScavengeBeforeRemark is not desirable. Again thanks everyone for your assistance, Doug. ________________________________ From: hotspot-gc-use-bounces at openjdk.java.net on behalf of Y Srinivas Ramakrishna Sent: Sat 19/04/2008 2:24 p.m. To: Y Srinivas Ramakrishna Cc: Jones, Doug H; hotspot-gc-use at openjdk.java.net; Thomas Viessmann Subject: Re: CMS GC tuning under JVM 5.0 Here you go:- http://bugs.sun.com/view_bug.do?bug_id=6538910 It shows up as fixed in 5u16b01 according to the bug report. Also documented is the workaround. 
If you have issues with long remark pauses when you use the workaround, refer to CR http://bugs.sun.com/view_bug.do?bug_id=6572569 -- ramki ----- Original Message ----- From: Y Srinivas Ramakrishna Date: Friday, April 18, 2008 7:15 pm Subject: Re: CMS GC tuning under JVM 5.0 To: Thomas Viessmann Cc: "Jones, Doug H" , hotspot-gc-use at openjdk.java.net > > > 31680.017: [CMS-concurrent-mark: 0.477/0.477 secs] > > > > 31680.023: [CMS-concurrent-preclean: 0.006/0.006 secs] > > > > secs]31768.430: [CMS31768.430: [CMS-concurrent-abortable-preclean: > > > > 5.410/88.406 secs] > > > > 100080.381: [CMS-concurrent-mark: 0.494/0.494 secs] > > > > 100080.390: [CMS-concurrent-preclean: 0.009/0.009 secs] > > > > secs]100259.694: [CMS100259.695: > [CMS-concurrent-abortable-preclean: > > > > > 10.649/179.305 sec > > > > > > s] > > Just to add a bit to what Jon said, the notation [CMS-concurrent-xxx: > yyy/zzz secs] > indicates that the CMS concurrent xxx-phase took roughly zzz secs of wall-clock > elapsed time, of which yyy secs of accumulated elapsed time was spent > by the > CMS thread doing that work; the remainder of the time, zzz - yyy secs, > the > thread was either sleeping or was stalled for a lock. Note that yyy > secs is > a rough upper bound of the lwp-virtual time, but the thread need not necessarily > have been on-proc all of that time. > > So in particular, in the last display, of the 179 secs of elapsed time > the CMS thread > was actually sleeping for 169 secs. Jon's conjecture of the bug about > CMSMaxAbortablePrecleanTime > seems to be the most likely explanation. (There was a confusion with > the interpretation > of the units, treating an intended ms spec as a seconds spec, the documented > workaround is to set the value to a value in seconds not exceeding > about 5: > -XX:CMSMaxAbortablePrecleanTime=5 -- or to zero to just elide that phase.) > > I believe this has been fixed in some later version of JDK 5uXX, but > am unable to > check the bug id at the moment. 
> > -- ramki > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From Y.S.Ramakrishna at Sun.COM Mon Apr 21 16:29:59 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Mon, 21 Apr 2008 16:29:59 -0700 Subject: CMS GC tuning under JVM 5.0 In-Reply-To: <027FCB5D4C65CC4CA714042A4EE8CC6B026739E1@nzprm231.apac.corp.eds.com> References: <027FCB5D4C65CC4CA714042A4EE8CC6B041974C3@nzprm231.apac.corp.eds.com> <48084C93.90606@sun.com> <027FCB5D4C65CC4CA714042A4EE8CC6B026739E1@nzprm231.apac.corp.eds.com> Message-ID: Hi Doug -- > 3) As suggested in the second bug mentioned by Ramki, we may also set > +CMSScavengeBeforeRemark. This flag is I believe not on by default in > any current JDK version, but in our case with a relatively small > NewSize, a ParNew collection takes max something like 0.02sec (and > that's when at its full 24Mb) so should add virtually nothing to the > time the app is stopped. Unfortunately, it turns out that because of an engineering process error the suggested fix of turning CMSScavengeBeforeRemark into a "product" flag was not accomplished as part of the CR I mentioned, a fact that I discovered a few hours ago when assisting a customer running on 5uXX. This is being corrected in the next update of 5uXX and the bug report will be fixed to reflect that unfortunate and inadvertent omission. (Note that this problem is specific to 5uXX; with 6uXX and 7, there is no problem -- the flag is indeed available for setting in 6uXX and 7.) 
-- ramki > > Actually from looking at other CMS Collections in the same log, just > 1) and 3) on their own may fix the problem because we see several > examples where the CMS GC has kicked in with eden containing less than > 1Mb and the abortable-preclean phase is omitted altogether. However > this is specific to the characteristics of our application. I have no > doubt that there are other situations where setting > CMSScavengeBeforeRemark is not desirable. From Keith.Holdaway at sas.com Sat Apr 26 07:28:08 2008 From: Keith.Holdaway at sas.com (Keith Holdaway) Date: Sat, 26 Apr 2008 10:28:08 -0400 Subject: Weak References In-Reply-To: <477D8067.8050802@Sun.COM> References: <9BADD5B8-F9DA-4656-843B-7D44FF36963A@mugfu.com> <477C09ED.9020400@Sun.COM> <22133221-306B-41D3-AE57-155876104354@mugfu.com> <477C1718.70004@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49608FC799162@MERCMBX14.na.sas.com> <5d649bdb0801031600j7d36c5e9k4049421726346cf3@mail.gmail.com> <477D8067.8050802@Sun.COM> Message-ID: <304E9E55F6A4BE4B910C2437D4D1B4960A622D2BE7@MERCMBX14.na.sas.com> If we are seeing a huge build up of weak references: sun/rmi/transport/WeakRef => 1654973 1644293 10680 0 0 Total 794333 783888 10445 0 0 => com/sas/metadata/remote/MdObjectListImpl 194966 194966 0 0 0 => com/sas/metadata/remote/impl/PropertyImpl 192165 192165 0 0 0 => com/sas/services/information/metadata/OMRProperty When does the GC algorithm decide to collect? Is there something that can be done programmatically to collect earlier? I assume GC will not collect until the weak references are "dead", i.e. the referents are available for GC since no strong refs are pointing at the referent? Any guidance appreciated. 
keith Keith R Holdaway Java Development Technologies SAS The Power to Know Carpe Diem From Keith.Holdaway at sas.com Sat Apr 26 07:45:09 2008 From: Keith.Holdaway at sas.com (Keith Holdaway) Date: Sat, 26 Apr 2008 10:45:09 -0400 Subject: java.lang.OutOfMemoryError: nativeGetNewTLA References: <9BADD5B8-F9DA-4656-843B-7D44FF36963A@mugfu.com> <477C09ED.9020400@Sun.COM> <22133221-306B-41D3-AE57-155876104354@mugfu.com> <477C1718.70004@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49608FC799162@MERCMBX14.na.sas.com> <5d649bdb0801031600j7d36c5e9k4049421726346cf3@mail.gmail.com> <477D8067.8050802@Sun.COM> Message-ID: <304E9E55F6A4BE4B910C2437D4D1B4960A622D2BEA@MERCMBX14.na.sas.com> Any ideas what this refers to? java.lang.OutOfMemoryError: nativeGetNewTLA at org.jgroups.protocols.TP.bufferToMessage(TP.java:972) at org.jgroups.protocols.TP.handleIncomingPacket(TP.java:829) at org.jgroups.protocols.TP.access$400(TP.java:45) at org.jgroups.protocols.TP$IncomingPacketHandler.run(TP.java:1296) at java.lang.Thread.run(Thread.java:595) 2008-04-26 02:38:13,411 ERROR [org.jgroups.stack.DownHandler] DownHandler (NAKACK) caught exception java.lang.OutOfMemoryError: nativeGetNewTLA at org.jgroups.protocols.pbcast.NAKACK.getDigestHighestDeliveredMsgs(NAKACK.java:935) at org.jgroups.protocols.pbcast.NAKACK.down(NAKACK.java:422) at org.jgroups.stack.DownHandler.run(Protocol.java:121) 2008-04-26 02:38:17,471 WARN [org.jgroups.util.TimeScheduler] exception executing task org.jgroups.protocols.pbcast.STABLE$StableTask at 4130b93 java.lang.OutOfMemoryError: nativeGetNewTLA at org.jgroups.protocols.pbcast.STABLE$StableTask.run(STABLE.java:783) at org.jgroups.util.TimeScheduler$TaskWrapper.run(TimeScheduler.java:204) at java.util.TimerThread.mainLoop(Timer.java:512) at java.util.TimerThread.run(Timer.java:462) 2008-04-26 02:38:20,298 ERROR [org.jgroups.stack.UpHandler] UpHandler (VERIFY_SUSPECT) caught exception java.lang.OutOfMemoryError: nativeGetNewTLA at 
org.jgroups.util.Queue.addInternal(Queue.java:570) at org.jgroups.util.Queue.add(Queue.java:143) at org.jgroups.stack.Protocol.receiveUpEvent(Protocol.java:474) at org.jgroups.stack.Protocol.passUp(Protocol.java:520) at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:170) at org.jgroups.stack.UpHandler.run(Protocol.java:60) 2008-04-26 02:38:20,298 ERROR [org.jgroups.stack.DownHandler] DownHandler (FD) caught exception java.lang.OutOfMemoryError: nativeGetNewTLA at org.jgroups.util.Queue.addInternal(Queue.java:570) at org.jgroups.util.Queue.add(Queue.java:143) at org.jgroups.stack.Protocol.receiveDownEvent(Protocol.java:503) at org.jgroups.stack.Protocol.passDown(Protocol.java:533) at org.jgroups.protocols.FD.down(FD.java:339) at org.jgroups.stack.DownHandler.run(Protocol.java:121) 2008-04-26 02:38:24,562 ERROR [org.jgroups.stack.DownHandler] DownHandler (UNICAST) caught exception java.lang.OutOfMemoryError: nativeGetNewTLA at org.jgroups.util.Queue.addInternal(Queue.java:570) at org.jgroups.util.Queue.add(Queue.java:143) at org.jgroups.stack.Protocol.receiveDownEvent(Protocol.java:503) at org.jgroups.stack.Protocol.passDown(Protocol.java:533) at org.jgroups.protocols.UNICAST.down(UNICAST.java:391) at org.jgroups.stack.DownHandler.run(Protocol.java:121) 2008-04-26 02:38:25,936 ERROR [org.jgroups.stack.UpHandler] UpHandler (AUTH) caught exception java.lang.OutOfMemoryError: nativeGetNewTLA at org.jgroups.util.Queue.addInternal(Queue.java:570) at org.jgroups.util.Queue.add(Queue.java:143) at org.jgroups.stack.Protocol.receiveUpEvent(Protocol.java:474) at org.jgroups.protocols.pbcast.GMS.receiveUpEvent(GMS.java:788) at org.jgroups.stack.Protocol.passUp(Protocol.java:520) at org.jgroups.protocols.AUTH.up(AUTH.java:143) at org.jgroups.stack.UpHandler.run(Protocol.java:60) 2008-04-26 02:38:35,854 ERROR [org.jgroups.stack.DownHandler] DownHandler (FD) caught exception java.lang.OutOfMemoryError: nativeGetNewTLA at 
org.jgroups.util.Queue.addInternal(Queue.java:570) at org.jgroups.util.Queue.add(Queue.java:143) at org.jgroups.stack.Protocol.receiveDownEvent(Protocol.java:503) at org.jgroups.stack.Protocol.passDown(Protocol.java:533) at org.jgroups.protocols.FD.down(FD.java:339) at org.jgroups.stack.DownHandler.run(Protocol.java:121) 2008-04-26 02:38:38,618 ERROR [org.jgroups.stack.UpHandler] UpHandler (UNICAST) caught exception java.lang.OutOfMemoryError: nativeGetNewTLA at org.jgroups.util.Queue.addInternal(Queue.java:570) at org.jgroups.util.Queue.add(Queue.java:143) at org.jgroups.stack.Protocol.receiveUpEvent(Protocol.java:474) at org.jgroups.stack.Protocol.passUp(Protocol.java:520) at org.jgroups.protocols.UNICAST.up(UNICAST.java:259) at org.jgroups.stack.UpHandler.run(Protocol.java:60) Running in JBoss. keith Keith R Holdaway Java Development Technologies SAS The Power to Know Carpe Diem -----Original Message----- From: Keith Holdaway Sent: Saturday, April 26, 2008 10:28 AM To: 'Y.S.Ramakrishna at Sun.COM' Cc: hotspot-gc-use at openjdk.java.net Subject: Weak References If we are seeing a huge build up of weak references: sun/rmi/transport/WeakRef => 1654973 1644293 10680 0 0 Total 794333 783888 10445 0 0 => com/sas/metadata/remote/MdObjectListImpl 194966 194966 0 0 0 => com/sas/metadata/remote/impl/PropertyImpl 192165 192165 0 0 0 => com/sas/services/information/metadata/OMRProperty When does the GC algorithm decide to collect? Is there something that can be done programatically to collect earlier? I assume GC will not collect until the weak references are "dead", i.e. the referents are available for GC since no strong refs are pointing at the referent? Any guidance appreciated. 
keith Keith R Holdaway Java Development Technologies SAS The Power to Know Carpe Diem From Y.S.Ramakrishna at Sun.COM Sun Apr 27 22:21:36 2008 From: Y.S.Ramakrishna at Sun.COM (Y Srinivas Ramakrishna) Date: Sun, 27 Apr 2008 22:21:36 -0700 Subject: Weak References In-Reply-To: <304E9E55F6A4BE4B910C2437D4D1B4960A622D2BE7@MERCMBX14.na.sas.com> References: <9BADD5B8-F9DA-4656-843B-7D44FF36963A@mugfu.com> <477C09ED.9020400@Sun.COM> <22133221-306B-41D3-AE57-155876104354@mugfu.com> <477C1718.70004@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49608FC799162@MERCMBX14.na.sas.com> <5d649bdb0801031600j7d36c5e9k4049421726346cf3@mail.gmail.com> <477D8067.8050802@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B4960A622D2BE7@MERCMBX14.na.sas.com> Message-ID: Hi Keith -- > If we are seeing a huge build up of weak references: > > sun/rmi/transport/WeakRef => > 1654973 1644293 10680 0 0 Total > 794333 783888 10445 0 0 => com/sas/metadata/remote/MdObjectListImpl > 194966 194966 0 0 0 => com/sas/metadata/remote/impl/PropertyImpl > 192165 192165 0 0 0 => com/sas/services/information/metadata/OMRProperty > > When does the GC algorithm decide to collect? Is there something that > can be done programatically to collect earlier? When a GC finds that the referent of a WeakReference is not strongly reachable, then the WeakReference is cleared, the referent is collected and the WeakReference's queue (if any) is notified. The WeakReference itself is collected only if and when it becomes strongly unreachable. > > I assume GC will not collect until the weak references are "dead", > i.e. the referents are available for GC since no strong refs are > pointing at the referent? > > Any guidance appreciated. But, I do not know anything about how exactly RMI makes use of WeakReferences. 
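[Editorial aside: the clearing and enqueueing sequence ramki describes can be demonstrated directly with java.lang.ref; a minimal, self-contained sketch, nothing RMI- or SAS-specific. Note System.gc() is only a hint, so the demo nudges the collector in a loop rather than assuming one call suffices.]

```java
// Sketch: a WeakReference is cleared once its referent is no longer
// strongly reachable and a GC notices; the cleared reference is then
// enqueued on its ReferenceQueue. The Reference object itself is
// ordinary garbage collected later, when nothing refers to it.
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object referent = new Object();
        WeakReference<Object> ref = new WeakReference<>(referent, queue);

        System.gc();
        // Still strongly reachable via 'referent', so not cleared yet.
        System.out.println("before: " + (ref.get() != null)); // true

        referent = null; // drop the only strong reference
        for (int i = 0; i < 10 && ref.get() != null; i++) {
            System.gc();
            Thread.sleep(10);
        }
        System.out.println("cleared: " + (ref.get() == null));     // usually true by now
        System.out.println("enqueued: " + (queue.remove(1000) == ref));
    }
}
```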
-- ramki From heiko.wagner at apis.de Mon Apr 28 04:27:17 2008 From: heiko.wagner at apis.de (Heiko Wagner) Date: Mon, 28 Apr 2008 13:27:17 +0200 Subject: VM memory management on Win32 with G1 Message-ID: <005001c8a922$d16f1780$c201a8c0@HeikoXP> Hi! I am evangelizing my company to use Java. I started using Java, in combination with our legacy system, via the invocation api and call methods using JNI. The platform is 32bit Windows. One problem is that the legacy software allocates pretty much of the address space using a VirtualAlloc api call, so there is no large contiguous space left for Java. As far as I know the 2nd edition of the Java VM spec removes the need to have the heap in one contiguous segment. As far as my understanding goes the G1 collector should make a non contiguous heap allocation possible. Am I right in my assumption? Is it possible to enable such a memory layout in Java 7? Regards Heiko From Jon.Masamitsu at Sun.COM Mon Apr 28 06:58:40 2008 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Mon, 28 Apr 2008 06:58:40 -0700 Subject: VM memory management on Win32 with G1 In-Reply-To: <005001c8a922$d16f1780$c201a8c0@HeikoXP> References: <005001c8a922$d16f1780$c201a8c0@HeikoXP> Message-ID: <4815D810.30007@Sun.COM> None of the hotspot collectors (including G1) are implemented to use a non-contiguous heap. We're aware of the problem of not being able to allocate a contiguous address space for the Java heap, but are not currently working on that issue for JDK 7. The design of G1 may make it more amenable to a non-contiguous heap in the future, but G1 is a new collector and we have our hands full right now getting it to product quality in its current configuration. Heiko Wagner wrote: > Hi! I am evangelizing my company to use Java. I started using Java, in > combination with our legacy system, via the invocation api and call methods > using JNI. The platform is 32bit Windows. 
One problem is that the legacy > software allocates pretty much of the address space using a VirtualAlloc api > call, so there is no large contiguous space left for Java. As far as I know > the 2nd edition of the Java VM spec removes the need to have the heap in one > contiguous segment. As far as my understanding goes the G1 collector should > make a non contiguous heap allocation possible. I am right with my > assumption? It it possible to enable such a memory layout in Java 7? > > > Regards > Heiko > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From Jon.Masamitsu at Sun.COM Mon Apr 28 07:45:11 2008 From: Jon.Masamitsu at Sun.COM (Jon Masamitsu) Date: Mon, 28 Apr 2008 07:45:11 -0700 Subject: java.lang.OutOfMemoryError: nativeGetNewTLA In-Reply-To: <304E9E55F6A4BE4B910C2437D4D1B4960A622D2BEA@MERCMBX14.na.sas.com> References: <9BADD5B8-F9DA-4656-843B-7D44FF36963A@mugfu.com> <477C09ED.9020400@Sun.COM> <22133221-306B-41D3-AE57-155876104354@mugfu.com> <477C1718.70004@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B49608FC799162@MERCMBX14.na.sas.com> <5d649bdb0801031600j7d36c5e9k4049421726346cf3@mail.gmail.com> <477D8067.8050802@Sun.COM> <304E9E55F6A4BE4B910C2437D4D1B4960A622D2BEA@MERCMBX14.na.sas.com> Message-ID: <4815E2F7.60000@Sun.COM> Keith Holdaway wrote: > Any ideas what this refers to? I looked through the hotspot sources and didn't find any references to nativeGetNewTLA. 
> > java.lang.OutOfMemoryError: nativeGetNewTLA > at org.jgroups.protocols.TP.bufferToMessage(TP.java:972) > at org.jgroups.protocols.TP.handleIncomingPacket(TP.java:829) > at org.jgroups.protocols.TP.access$400(TP.java:45) > at org.jgroups.protocols.TP$IncomingPacketHandler.run(TP.java:1296) > at java.lang.Thread.run(Thread.java:595) > 2008-04-26 02:38:13,411 ERROR [org.jgroups.stack.DownHandler] DownHandler (NAKACK) caught exception > java.lang.OutOfMemoryError: nativeGetNewTLA > at org.jgroups.protocols.pbcast.NAKACK.getDigestHighestDeliveredMsgs(NAKACK.java:935) > at org.jgroups.protocols.pbcast.NAKACK.down(NAKACK.java:422) > at org.jgroups.stack.DownHandler.run(Protocol.java:121) > 2008-04-26 02:38:17,471 WARN [org.jgroups.util.TimeScheduler] exception executing task org.jgroups.protocols.pbcast.STABLE$StableTask at 4130b93 > java.lang.OutOfMemoryError: nativeGetNewTLA > at org.jgroups.protocols.pbcast.STABLE$StableTask.run(STABLE.java:783) > at org.jgroups.util.TimeScheduler$TaskWrapper.run(TimeScheduler.java:204) > at java.util.TimerThread.mainLoop(Timer.java:512) > at java.util.TimerThread.run(Timer.java:462) > 2008-04-26 02:38:20,298 ERROR [org.jgroups.stack.UpHandler] UpHandler (VERIFY_SUSPECT) caught exception > java.lang.OutOfMemoryError: nativeGetNewTLA > at org.jgroups.util.Queue.addInternal(Queue.java:570) > at org.jgroups.util.Queue.add(Queue.java:143) > at org.jgroups.stack.Protocol.receiveUpEvent(Protocol.java:474) > at org.jgroups.stack.Protocol.passUp(Protocol.java:520) > at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:170) > at org.jgroups.stack.UpHandler.run(Protocol.java:60) > 2008-04-26 02:38:20,298 ERROR [org.jgroups.stack.DownHandler] DownHandler (FD) caught exception > java.lang.OutOfMemoryError: nativeGetNewTLA > at org.jgroups.util.Queue.addInternal(Queue.java:570) > at org.jgroups.util.Queue.add(Queue.java:143) > at org.jgroups.stack.Protocol.receiveDownEvent(Protocol.java:503) > at 
org.jgroups.stack.Protocol.passDown(Protocol.java:533) > at org.jgroups.protocols.FD.down(FD.java:339) > at org.jgroups.stack.DownHandler.run(Protocol.java:121) > 2008-04-26 02:38:24,562 ERROR [org.jgroups.stack.DownHandler] DownHandler (UNICAST) caught exception > java.lang.OutOfMemoryError: nativeGetNewTLA > at org.jgroups.util.Queue.addInternal(Queue.java:570) > at org.jgroups.util.Queue.add(Queue.java:143) > at org.jgroups.stack.Protocol.receiveDownEvent(Protocol.java:503) > at org.jgroups.stack.Protocol.passDown(Protocol.java:533) > at org.jgroups.protocols.UNICAST.down(UNICAST.java:391) > at org.jgroups.stack.DownHandler.run(Protocol.java:121) > 2008-04-26 02:38:25,936 ERROR [org.jgroups.stack.UpHandler] UpHandler (AUTH) caught exception > java.lang.OutOfMemoryError: nativeGetNewTLA > at org.jgroups.util.Queue.addInternal(Queue.java:570) > at org.jgroups.util.Queue.add(Queue.java:143) > at org.jgroups.stack.Protocol.receiveUpEvent(Protocol.java:474) > at org.jgroups.protocols.pbcast.GMS.receiveUpEvent(GMS.java:788) > at org.jgroups.stack.Protocol.passUp(Protocol.java:520) > at org.jgroups.protocols.AUTH.up(AUTH.java:143) > at org.jgroups.stack.UpHandler.run(Protocol.java:60) > 2008-04-26 02:38:35,854 ERROR [org.jgroups.stack.DownHandler] DownHandler (FD) caught exception > java.lang.OutOfMemoryError: nativeGetNewTLA > at org.jgroups.util.Queue.addInternal(Queue.java:570) > at org.jgroups.util.Queue.add(Queue.java:143) > at org.jgroups.stack.Protocol.receiveDownEvent(Protocol.java:503) > at org.jgroups.stack.Protocol.passDown(Protocol.java:533) > at org.jgroups.protocols.FD.down(FD.java:339) > at org.jgroups.stack.DownHandler.run(Protocol.java:121) > 2008-04-26 02:38:38,618 ERROR [org.jgroups.stack.UpHandler] UpHandler (UNICAST) caught exception > java.lang.OutOfMemoryError: nativeGetNewTLA > at org.jgroups.util.Queue.addInternal(Queue.java:570) > at org.jgroups.util.Queue.add(Queue.java:143) > at org.jgroups.stack.Protocol.receiveUpEvent(Protocol.java:474) > 
at org.jgroups.stack.Protocol.passUp(Protocol.java:520) > at org.jgroups.protocols.UNICAST.up(UNICAST.java:259) > at org.jgroups.stack.UpHandler.run(Protocol.java:60) > > Running in JBoss. > > keith > > Keith R Holdaway > Java Development Technologies > > SAS The Power to Know > > Carpe Diem > > > -----Original Message----- > From: Keith Holdaway > Sent: Saturday, April 26, 2008 10:28 AM > To: 'Y.S.Ramakrishna at Sun.COM' > Cc: hotspot-gc-use at openjdk.java.net > Subject: Weak References > > If we are seeing a huge build up of weak references: > > sun/rmi/transport/WeakRef => > 1654973 1644293 10680 0 0 Total > 794333 783888 10445 0 0 => com/sas/metadata/remote/MdObjectListImpl > 194966 194966 0 0 0 => com/sas/metadata/remote/impl/PropertyImpl > 192165 192165 0 0 0 => com/sas/services/information/metadata/OMRProperty > > When does the GC algorithm decide to collect? Is there something that can be done programatically to collect earlier? > > I assume GC will not collect until the weak references are "dead", i.e. the referents are available for GC since no strong refs are pointing at the referent? > > Any guidance appreciated. 
> > keith > > Keith R Holdaway > Java Development Technologies > > SAS The Power to Know > > Carpe Diem > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From jamesnichols3 at gmail.com Mon Apr 28 07:47:05 2008 From: jamesnichols3 at gmail.com (jamesnichols3 at gmail.com) Date: Mon, 28 Apr 2008 14:47:05 +0000 Subject: java.lang.OutOfMemoryError: nativeGetNewTLA In-Reply-To: <4815E2F7.60000@Sun.COM> References: <9BADD5B8-F9DA-4656-843B-7D44FF36963A@mugfu.com><477C09ED.9020400@Sun.COM><22133221-306B-41D3-AE57-155876104354@mugfu.com><477C1718.70004@Sun.COM><304E9E55F6A4BE4B910C2437D4D1B49608FC799162@MERCMBX14.na.sas.com><5d649bdb0801031600j7d36c5e9k4049421726346cf3@mail.gmail.com><477D8067.8050802@Sun.COM><304E9E55F6A4BE4B910C2437D4D1B4960A622D2BEA@MERCMBX14.na.sas.com><4815E2F7.60000@Sun.COM> Message-ID: <1258495703-1209394027-cardhu_decombobulator_blackberry.rim.net-1551813261-@bxe149.bisx.prod.on.blackberry> What version of jgroups is it? I don't recognize the stack traces. Jim Sent from my Verizon Wireless BlackBerry -----Original Message----- From: Jon Masamitsu Date: Mon, 28 Apr 2008 07:45:11 To: Keith Holdaway Cc: "hotspot-gc-use at openjdk.java.net" Subject: Re: java.lang.OutOfMemoryError: nativeGetNewTLA Keith Holdaway wrote: > Any ideas what this refers to? I looked through the hotspot sources and didn't find any references to nativeGetNewTLA. 
> java.lang.OutOfMemoryError: nativeGetNewTLA
>         at org.jgroups.protocols.TP.bufferToMessage(TP.java:972)
>         at org.jgroups.protocols.TP.handleIncomingPacket(TP.java:829)
>         at org.jgroups.protocols.TP.access$400(TP.java:45)
>         at org.jgroups.protocols.TP$IncomingPacketHandler.run(TP.java:1296)
>         at java.lang.Thread.run(Thread.java:595)
> 2008-04-26 02:38:13,411 ERROR [org.jgroups.stack.DownHandler] DownHandler (NAKACK) caught exception
> java.lang.OutOfMemoryError: nativeGetNewTLA
>         at org.jgroups.protocols.pbcast.NAKACK.getDigestHighestDeliveredMsgs(NAKACK.java:935)
>         at org.jgroups.protocols.pbcast.NAKACK.down(NAKACK.java:422)
>         at org.jgroups.stack.DownHandler.run(Protocol.java:121)
> 2008-04-26 02:38:17,471 WARN [org.jgroups.util.TimeScheduler] exception executing task org.jgroups.protocols.pbcast.STABLE$StableTask at 4130b93
> java.lang.OutOfMemoryError: nativeGetNewTLA
>         at org.jgroups.protocols.pbcast.STABLE$StableTask.run(STABLE.java:783)
>         at org.jgroups.util.TimeScheduler$TaskWrapper.run(TimeScheduler.java:204)
>         at java.util.TimerThread.mainLoop(Timer.java:512)
>         at java.util.TimerThread.run(Timer.java:462)
> 2008-04-26 02:38:20,298 ERROR [org.jgroups.stack.UpHandler] UpHandler (VERIFY_SUSPECT) caught exception
> java.lang.OutOfMemoryError: nativeGetNewTLA
>         at org.jgroups.util.Queue.addInternal(Queue.java:570)
>         at org.jgroups.util.Queue.add(Queue.java:143)
>         at org.jgroups.stack.Protocol.receiveUpEvent(Protocol.java:474)
>         at org.jgroups.stack.Protocol.passUp(Protocol.java:520)
>         at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:170)
>         at org.jgroups.stack.UpHandler.run(Protocol.java:60)
> 2008-04-26 02:38:20,298 ERROR [org.jgroups.stack.DownHandler] DownHandler (FD) caught exception
> java.lang.OutOfMemoryError: nativeGetNewTLA
>         at org.jgroups.util.Queue.addInternal(Queue.java:570)
>         at org.jgroups.util.Queue.add(Queue.java:143)
>         at org.jgroups.stack.Protocol.receiveDownEvent(Protocol.java:503)
>         at org.jgroups.stack.Protocol.passDown(Protocol.java:533)
>         at org.jgroups.protocols.FD.down(FD.java:339)
>         at org.jgroups.stack.DownHandler.run(Protocol.java:121)
> 2008-04-26 02:38:24,562 ERROR [org.jgroups.stack.DownHandler] DownHandler (UNICAST) caught exception
> java.lang.OutOfMemoryError: nativeGetNewTLA
>         at org.jgroups.util.Queue.addInternal(Queue.java:570)
>         at org.jgroups.util.Queue.add(Queue.java:143)
>         at org.jgroups.stack.Protocol.receiveDownEvent(Protocol.java:503)
>         at org.jgroups.stack.Protocol.passDown(Protocol.java:533)
>         at org.jgroups.protocols.UNICAST.down(UNICAST.java:391)
>         at org.jgroups.stack.DownHandler.run(Protocol.java:121)
> 2008-04-26 02:38:25,936 ERROR [org.jgroups.stack.UpHandler] UpHandler (AUTH) caught exception
> java.lang.OutOfMemoryError: nativeGetNewTLA
>         at org.jgroups.util.Queue.addInternal(Queue.java:570)
>         at org.jgroups.util.Queue.add(Queue.java:143)
>         at org.jgroups.stack.Protocol.receiveUpEvent(Protocol.java:474)
>         at org.jgroups.protocols.pbcast.GMS.receiveUpEvent(GMS.java:788)
>         at org.jgroups.stack.Protocol.passUp(Protocol.java:520)
>         at org.jgroups.protocols.AUTH.up(AUTH.java:143)
>         at org.jgroups.stack.UpHandler.run(Protocol.java:60)
> 2008-04-26 02:38:35,854 ERROR [org.jgroups.stack.DownHandler] DownHandler (FD) caught exception
> java.lang.OutOfMemoryError: nativeGetNewTLA
>         at org.jgroups.util.Queue.addInternal(Queue.java:570)
>         at org.jgroups.util.Queue.add(Queue.java:143)
>         at org.jgroups.stack.Protocol.receiveDownEvent(Protocol.java:503)
>         at org.jgroups.stack.Protocol.passDown(Protocol.java:533)
>         at org.jgroups.protocols.FD.down(FD.java:339)
>         at org.jgroups.stack.DownHandler.run(Protocol.java:121)
> 2008-04-26 02:38:38,618 ERROR [org.jgroups.stack.UpHandler] UpHandler (UNICAST) caught exception
> java.lang.OutOfMemoryError: nativeGetNewTLA
>         at org.jgroups.util.Queue.addInternal(Queue.java:570)
>         at org.jgroups.util.Queue.add(Queue.java:143)
>         at org.jgroups.stack.Protocol.receiveUpEvent(Protocol.java:474)
>         at org.jgroups.stack.Protocol.passUp(Protocol.java:520)
>         at org.jgroups.protocols.UNICAST.up(UNICAST.java:259)
>         at org.jgroups.stack.UpHandler.run(Protocol.java:60)
>
> Running in JBoss.
>
> keith
>
> Keith R Holdaway
> Java Development Technologies
>
> SAS The Power to Know
>
> Carpe Diem
>
>
> -----Original Message-----
> From: Keith Holdaway
> Sent: Saturday, April 26, 2008 10:28 AM
> To: 'Y.S.Ramakrishna at Sun.COM'
> Cc: hotspot-gc-use at openjdk.java.net
> Subject: Weak References
>
> If we are seeing a huge build-up of weak references:
>
> sun/rmi/transport/WeakRef =>
>   1654973  1644293  10680  0  0  Total
>    794333   783888  10445  0  0  => com/sas/metadata/remote/MdObjectListImpl
>    194966   194966      0  0  0  => com/sas/metadata/remote/impl/PropertyImpl
>    192165   192165      0  0  0  => com/sas/services/information/metadata/OMRProperty
>
> When does the GC algorithm decide to collect? Is there something that can be done programmatically to collect earlier?
>
> I assume GC will not collect until the weak references are "dead", i.e. the referents are available for GC since no strong refs are pointing at the referent?
>
> Any guidance appreciated.
>
> keith
>
> Keith R Holdaway
> Java Development Technologies
>
> SAS The Power to Know
>
> Carpe Diem
>
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
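[Editor's note: Keith's closing assumption is correct and easy to demonstrate: the collector clears a WeakReference only after the referent is no longer strongly reachable, and the clearing happens as part of a collection cycle, not on demand. A minimal, self-contained sketch follows; the class and method names are illustrative, not from this thread.]

```java
import java.lang.ref.WeakReference;

// Illustrative demo (names are invented for this sketch): a WeakReference is
// cleared only once no strong reference to its referent remains.
public class WeakRefDemo {

    // While a strong reference (the local 'referent') is live, the collector
    // must not clear the weak reference, even across an explicit GC request.
    static boolean heldWhileStronglyReachable() {
        Object referent = new Object();
        WeakReference<Object> ref = new WeakReference<>(referent);
        System.gc();                           // request a collection
        boolean stillHeld = ref.get() != null; // guaranteed: referent is strongly reachable here
        referent.hashCode();                   // keep the strong reference live past the check
        return stillHeld;
    }

    // Once the last strong reference is gone, a collection clears the weak
    // reference. System.gc() is only a request, hence the retry loop.
    static boolean clearedAfterStrongRefDropped() {
        WeakReference<Object> ref = new WeakReference<>(new Object());
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc();
        }
        return ref.get() == null;
    }

    public static void main(String[] args) {
        System.out.println("held while strongly reachable: " + heldWhileStronglyReachable());
        System.out.println("cleared after strong ref dropped: " + clearedAfterStrongRefDropped());
    }
}
```

[There is no public API to force weak references to be cleared earlier; the only lever is dropping the strong references and letting (or requesting) a collection. To find out when clearing has happened, the usual pattern is to register the WeakReference with a java.lang.ref.ReferenceQueue and poll the queue.]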