CMS Garbage collection eating up processor power

Jon Masamitsu Jon.Masamitsu at Sun.COM
Fri Mar 28 07:39:57 UTC 2008


T.K wrote on 03/27/08 20:19:

> I've attached an old gc log that shows similar behavior on the heap.
> The problem with that collector is that each collection takes quite a
> long time, 20-70 seconds.  The server basically went down, with
> customers complaining, during that time, so we had to try CMS, but that
> isn't working out well either.


A significant fraction of your heap is filled with live data.  The GC is
running almost constantly because it is trying to keep the application
going by recovering whatever space it can.  If the occupancy of the heap
does not go down when the number of users drops, there's some code in the
application that is keeping objects alive that perhaps are never going to
be used again.  If the application really needs all that data, then you
need a larger heap to keep GC from running so frequently.  If increasing
the size of the heap just delays the onset of the frequent GCs and you do
think that all that data should be live, you may have to limit the number
of users in some way so as not to fill up the heap.
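
As a rough sketch of the larger-heap option (an illustration, not a tested
recommendation for this setup; it assumes the box has the physical memory
and a JVM/OS combination that can address it), only the sizing flags would
change, e.g.:

   -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -Xms4G -Xmx4G
   -XX:NewSize=384M -XX:MaxNewSize=384M -XX:PermSize=64M -XX:MaxPermSize=64M
   -XX:CMSInitiatingOccupancyFraction=60

The heap occupancy reported right after a CMS cycle completes (or after a
full collection at a quiet time) in a -XX:+PrintGCDetails log is a
reasonable estimate of how much of that data is actually live.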

>  
> We are running Sun Portal Server 6.2 on those web servers.  Do you know
> whether this is normal behavior for Portal Server with about 400-500
> users per instance?


Don't know about this.

> Thanks,
> TK
>
> On Thu, Mar 27, 2008 at 5:10 PM, Jon Masamitsu <Jon.Masamitsu at sun.com> wrote:
>
>     Late in this log I see
>
>     162417.510: [GC 162417.510: [ParNew: 370176K->11520K(381696K), 0.4232456 secs] 3055096K->2715938K(3134208K), 0.4240171 secs]
>
>     At that point the heap is about 87% full (2715938K/3134208K).  The
>     tenured generation is almost completely full.
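
(Reading that entry, assuming the usual -XX:+PrintGCDetails layout: the
ParNew pause shrank the young generation from 370176K to 11520K of its
381696K capacity, and the whole heap went from 3055096K to 2715938K of
3134208K, i.e. 2715938 / 3134208 ≈ 0.87.  Since only ~11520K of that is
young-generation data, roughly 2704418K of the remaining ~2752512K of
tenured capacity is occupied, about 98%.)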
>
>     Do you have similar logs when using the default GC?  We could use them
>     to verify the amount of live data.
>
>
>     T.K wrote:
>     >
>     > Hi Jon,
>     >
>     > Here's the attached gc log.  It starts out fine when we put in the
>     > change in the evening.  Eventually, the next morning (around 60000
>     > seconds into the gc log), when the load starts to come in, CMS
>     > starts to run consecutively.  The first CMS failures occur when we
>     > bump the users up to 700+, and they almost kill the server.  Ever
>     > since then, I haven't seen CMS stop, even when the load goes down to
>     > 50 users.
>     >
>     > I cut the middle of the log into 3 portions so that I can attach
>     > them.  :D
>     >
>     >
>     > Thanks,
>     > TK
>     >
>     >
>     > On 3/27/08, *Jon Masamitsu* <Jon.Masamitsu at sun.com> wrote:
>     >
>     >     Using CMS sometimes needs some tuning (especially
>     >     with the 1.4.2 jdk).  Do you have any gc logs
>     >     (-XX:+PrintGCDetails) so we can see what's happening?
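
(For reference, a typical way to capture such a log, sketched with options
that should be available on 1.4.2, is to add something like

   -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/path/to/gc.log

to the server's JVM options; the -Xloggc path is just a placeholder, and
it sends the GC output to a file instead of stdout.)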
>     >
>     >
>     >     T.K wrote:
>     >      > Hi All,
>     >      > We have 5 Sun Web Servers running on Java 1.4.2, and we used
>     >      > to use the default GC for the tenured space.  The problem
>     >      > with that is that it takes 60-80 seconds every time the GC
>     >      > happens, and the latency on the site goes crazy.  So we
>     >      > decided to change it to use the Concurrent Mark Sweep
>     >      > collector on one server to test it out.  Here's the setting:
>     >      >
>     >      > -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -Xms3G -Xmx3G
>     >      > -XX:NewSize=384M -XX:MaxNewSize=384M -XX:PermSize=64M -XX:MaxPermSize=64M
>     >      > -XX:CMSInitiatingOccupancyFraction=60
>     >      >
>     >      > With that setting, the server runs great.  But eventually,
>     >      > when the server reaches a medium load (around 100-200 users),
>     >      > the tenured space is always around half full, and the CMS
>     >      > collector starts to run continuously, one cycle after
>     >      > another.  It doesn't hurt the application for now, but it's
>     >      > taking 25% of processing time (we have 4 CPUs, so this web
>     >      > server always keeps one CPU busy).  I don't see that much CPU
>     >      > utilization on the other web servers that don't have CMS, and
>     >      > they have more users than the one with CMS.  If we put CMS on
>     >      > all 5 web servers, I'm wondering whether that will crash the
>     >      > servers.  What should I do to decrease the processor
>     >      > utilization caused by GC?
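
(As an illustration rather than a definitive fix for this setup: with
-XX:CMSInitiatingOccupancyFraction=60, a background CMS cycle starts
whenever the tenured generation crosses 60% occupancy, so a tenured space
hovering around half full will trigger cycles back to back.  If most of
that data at medium load is garbage rather than live, letting CMS start
later, e.g.

   -XX:CMSInitiatingOccupancyFraction=75

possibly together with -XX:+UseCMSInitiatingOccupancyOnly on JVM builds
that support it, reduces how often the collector runs; it will not help if
the occupancy is mostly live data.)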
>     >      >
>     >      > Also, I'm thinking of using i-CMS on the JVM; maybe that
>     >      > would slow down the CMS and reduce the amount of CPU it uses.
>     >      > Any thoughts?
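
(For reference, a minimal sketch of enabling incremental CMS, assuming the
1.4.2 build in use supports these options:

   -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing

i-CMS slices the concurrent work into small increments between young
collections; it is aimed mainly at machines with one or two CPUs and
spreads the concurrent work out rather than reducing it, so on a 4-CPU box
it may not lower the total GC CPU usage.)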
>     >      >
>     >      > Thanks,
>     >      > TK
>     >      >
>     >      >
>     >      >

_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use


