Hi again,

I think that Eden should be increased so that more short-lived objects
die in the young generation instead of being promoted to Old.
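
For example (these sizes are illustrative only, not a tested
recommendation for your workload), you could double the young
generation within the same 3G heap:

-Xms3G -Xmx3G -XX:NewSize=768M -XX:MaxNewSize=768M

Watch the -XX:+PrintTenuringDistribution output to confirm whether
promotion actually drops before settling on a size.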

Could -XX:SoftRefLRUPolicyMSPerMB=1 help here? Perhaps there are lots
of soft references that are clogging the heap?
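
(For context, a minimal sketch of the pattern that causes this; the
class and field names are hypothetical, written 1.4-style without
generics. With the default -XX:SoftRefLRUPolicyMSPerMB=1000, softly
reachable values get roughly one second of grace per free megabyte of
heap since their last use, so a busy cache can keep a nearly full
heap clogged; =1 makes them eligible for clearing much sooner.)

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class SoftCache {
    // Values are only softly reachable; they survive GC until the
    // JVM decides to clear them under memory pressure.
    private final Map map = new HashMap();

    public void put(String key, Object value) {
        map.put(key, new SoftReference(value));
    }

    public Object get(String key) {
        SoftReference ref = (SoftReference) map.get(key);
        return (ref == null) ? null : ref.get(); // null once cleared
    }
}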

Another possibility could be finalizers. Are there lots of finalizers
in the Portal Server code? If you have JMX deployed in the server,
you can check the number of objects pending finalization via jConsole.
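
(If any 5.0+ JVM is involved, the same number jConsole displays is
also available programmatically through the platform MemoryMXBean; a
minimal sketch below. Note that this java.lang.management API does
not exist on 1.4.2.)

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class PendingFinalizers {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        // Approximate count of objects whose finalize() has not run
        // yet; a steadily climbing value means the finalizer thread
        // cannot keep up with the creation of finalizable objects.
        System.out.println("Objects pending finalization: "
                + mem.getObjectPendingFinalizationCount());
    }
}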

Otherwise, there are some little-known options:

-Djava.finalizer.verbose=[true | false]
-Djava.finalizer.verbose.rate=[frequency of logging in seconds]
-Djava.finalizer.verbose.output=[filename]
-Djava.finalizer.threadCount=[# of finalizer threads]

The output looks like this:

<F: version = 1.4.2.10 >
<F: java.finalizer.threadCount = 1 >
<F: 1 54982 123064 147 >
<F: 2 170644 166824 284 >
<F: 3 251356 172390 94260 >
<F: 4 344950 187071 203 >

<F: a b c d >
a = sample number
b = milliseconds since start
c = count of live and pending objects with finalizers
d = count of pending objects with finalizers

If the number of pending finalizers continues to increase without
bound, or the number of pending objects is consistently over 100000,
you can increase the number of threads available for finalizing
objects to 2 with the following option:

-Djava.finalizer.threadCount=2
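
(If you enable the verbose output above, here is a quick sketch that
flags the "pending over 100000" condition in that log; the file name
is whatever you passed to -Djava.finalizer.verbose.output, and the
line format is assumed from the sample shown above.)

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.StringTokenizer;

public class FinalizerLogScan {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        String line;
        while ((line = in.readLine()) != null) {
            // Sample lines look like: <F: 1 54982 123064 147 >
            if (!line.startsWith("<F:")) {
                continue;
            }
            StringTokenizer t = new StringTokenizer(line, "<F: >");
            if (t.countTokens() != 4) {
                continue; // skips the version/threadCount banner lines
            }
            t.nextToken(); // a: sample number
            t.nextToken(); // b: milliseconds since start
            t.nextToken(); // c: live + pending objects with finalizers
            try {
                long pending = Long.parseLong(t.nextToken()); // d
                if (pending > 100000) {
                    System.out.println("High pending count: " + line);
                }
            } catch (NumberFormatException e) {
                // not a numeric sample line; ignore it
            }
        }
        in.close();
    }
}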

Note that you should keep finalizer.threadCount much lower than the
number of CPUs on the machine, as in the previous discussion of
ParallelGCThreads.

I have encountered this issue on older versions of Tomcat, and this
helped reduce the problem.

Hope this helps,
Fino
>
> On Fri, Mar 28, 2008 at 8:39 AM, Jon Masamitsu <Jon.Masamitsu@sun.com>
> wrote:
>
> T.K wrote on 03/27/08 20:19:
>
> > I've attached an old gclog where it has similar behavior on the
> > heap. But the problem with that is it takes quite a long time, like
> > 20-70 seconds. The server basically went down with customers
> > complaining during that time, so we had to try CMS, but that
> > doesn't work out well either.
>
> A significant fraction of your heap is filled with live data. The GC
> is running almost constantly because it is trying to keep the
> application going by recovering whatever space it can. If the
> occupancy of the heap does not go down when the number of users
> drops, there's some code in the application that is keeping objects
> alive that perhaps are never going to be used again. If the
> application really needs all that data, then you need a larger heap
> to keep GC from running so frequently. If increasing the size of the
> heap just delays the onset of the frequent GC's and you do think
> that all that data should be live, you may have to limit the number
> of users in some way so as to not fill up the heap.
>
> >
> > We are running Sun Portal Server 6.2 on those web servers. Do you
> > know if this is normal behavior for Portal Server with about
> > 400-500 users per instance?
>
> Don't know about this.
>
> > Thanks,
> > TK
> >
> > On Thu, Mar 27, 2008 at 5:10 PM, Jon Masamitsu <Jon.Masamitsu@sun.com>
> > wrote:
> >
> > Late in this log I see
> >
> > 162417.510: [GC 162417.510: [ParNew: 370176K->11520K(381696K),
> > 0.4232456 secs] 3055096K->2715938K(3134208K), 0.4240171 secs]
> >
> > At that point the heap is about 85% full (2715938K/3134208K). The
> > tenured generation is almost completely full.
> >
> > Do you have similar logs when using the default GC? We could use
> > them to verify the amount of live data.
> >
> >
> > T.K wrote:
> > >
> > > Hi Jon,
> > >
> > > Here's the attached gclog. It starts out fine when we put in the
> > > change in the evening. Eventually, the next morning (around 60000
> > > seconds on the gc clock), when the load starts to come in, CMS
> > > starts to run consecutively. The first CMS failures occur when we
> > > bump the users up to 700+, and almost kill the server. Ever since
> > > then, I don't see the CMS ever stop, even when the load goes down
> > > to 50 users.
> > >
> > > I cut the logs in between into 3 portions so that I can attach
> > > them. :D
> > >
> > > Thanks,
> > > TK
> > >
> > >
> > > On 3/27/08, Jon Masamitsu <Jon.Masamitsu@sun.com> wrote:
> > >
> > > Using CMS sometimes needs some tuning (especially with the 1.4.2
> > > jdk). Do you have any gc logs (-XX:+PrintGCDetails) so we can see
> > > what's happening?
> > >
> > > T.K wrote:
> > > > Hi All,
> > > > We got 5 Sun Web Servers running on Java 1.4.2, and used to use
> > > > the default GC for the tenured space. The problem with that is
> > > > that it takes 60-80 seconds every time the GC happens, and the
> > > > latency on the site goes crazy. So we decided to change one
> > > > server to use the Concurrent Mark Sweep collector to test it
> > > > out. Here's the setting:
> > > >
> > > > -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -Xms3G -Xmx3G
> > > > -XX:NewSize=384M -XX:MaxNewSize=384M -XX:PermSize=64M
> > > > -XX:MaxPermSize=64M -XX:CMSInitiatingOccupancyFraction=60
> > > >
> > > > With that setting, the server runs great. But eventually, when
> > > > the server reaches a medium load (around 100-200 users), the
> > > > tenured space is always around half full, and the CMS collector
> > > > starts to run continuously, one cycle after another. It doesn't
> > > > hurt the application for now, but it's taking 25% of processing
> > > > time (we have 4 cpus, so it always consumes one cpu's worth of
> > > > power). I don't see that much cpu utilization on the other web
> > > > servers that don't have CMS, and they have more users than the
> > > > one with CMS. If we put CMS on all 5 web servers, I'm wondering
> > > > whether that will crash the servers or not. What should I do to
> > > > decrease the processor utilization caused by GC?
> > > >
> > > > Also, I'm thinking of using i-CMS on the JVM; maybe that might
> > > > slow down the CMS and reduce the amount of CPU utilization by
> > > > CMS. Any thoughts?
> > > >
> > > > Thanks,
> > > > TK
> > > >
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use@openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>
> --
> Michael Finocchiaro
> michael.finocchiaro@gmail.com
> Mobile Telephone: +33 6 67 90 64 39
> MSN: le_fino@hotmail.com


--
Michael Finocchiaro
michael.finocchiaro@gmail.com
Mobile Telephone: +33 6 67 90 64 39
MSN: le_fino@hotmail.com