G1: higher perm gen footprint or a possible perm gen leak?

YU ZHANG yu.zhang at oracle.com
Fri Jan 3 11:53:34 PST 2014


Ramki,

The perm gen data would be very interesting.

And thanks for correcting me on my previous post:

"One small correction: CMS collects perm gen in major gc cycles, albeit 
concurrently with that flag enabled. The perm gen isn't cleaned at a 
minor gc with any of our collectors, since
global reachability isn't checked at minor gc's."
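
For anyone following the thread, the CMS configuration being discussed
is roughly the following (just a sketch; the perm gen sizes and the
MyApp main class are placeholders, not a tested command line):

  java -XX:+UseConcMarkSweepGC \
       -XX:+CMSClassUnloadingEnabled \
       -XX:PermSize=256m -XX:MaxPermSize=512m \
       -verbose:gc -XX:+PrintGCDetails \
       MyApp

With CMSClassUnloadingEnabled, dead classes in the perm gen are swept as
part of the concurrent (major) CMS cycle, not at minor gc's.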

Thanks,
Jenny

On 1/3/2014 11:30 AM, Srinivas Ramakrishna wrote:
> Thanks everyone for sharing your experiences. As I indicated, I do 
> realize that G1 does not collect the perm gen concurrently.
> What was surprising was that G1's use of perm gen was much higher 
> following its stop-the-world full gc's, which would have collected 
> the perm gen. As a result, G1 needed a perm gen quite a bit more 
> than twice the size given to parallel gc to be able to run an 
> application for a certain length of time.
>
> I'll provide more data on perm gen dynamics when I have it. My guess 
> would be that somehow G1's use of
> regions in the perm gen is causing a dilation of perm gen footprint on 
> account of fragmentation in the G1 perm
> gen regions. If that were the case, I would expect a modest increase 
> in the perm gen footprint, but it seemed the increase in
> footprint was much higher. I'll collect and post more concrete numbers 
> when I get a chance.
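>
> (For the record, I'll probably just sample perm gen occupancy
> periodically while the app runs, roughly something like
>
>     jstat -gcold <pid> 10s
>
> which, if I remember the columns right, reports perm gen capacity and
> utilization in the PC/PU columns on JDK 7, plus the full-gc entries
> from the gc logs.)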
>
> -- ramki
>
>
>
> On Fri, Jan 3, 2014 at 10:05 AM, YU ZHANG <yu.zhang at oracle.com> wrote:
>
>     Very interesting post.  As someone mentioned in the comments,
>     with -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled, CMS
>     can clean classes in PermGen with minor GC.  But G1 can only
>     unload classes during a full gc.  Full GC in G1 is slow as it is
>     single threaded.
>
>     Thanks,
>     Jenny
>
>     On 1/3/2014 7:47 AM, Jose Otavio Carlomagno Filho wrote:
>>     We recently switched to G1 in our application and started
>>     experiencing this type of behaviour too. It turned out G1 was
>>     not causing the problem; it was only exposing it to us.
>>
>>     Our application would generate a large number of proxy classes
>>     and that would cause the Perm Gen to fill up until a full GC was
>>     performed by G1. When using ParallelOldGC, this would not happen
>>     because full GCs would be executed much more frequently (when the
>>     old gen was full), which prevented the perm gen from filling up.
>>
>>     You can find more info about our problem and our analysis here:
>>     http://stackoverflow.com/questions/20274317/g1-garbage-collector-perm-gen-fills-up-indefinitely-until-a-full-gc-is-performe
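>>
>>     As a rough illustration only (a toy sketch, not our actual code),
>>     the pattern looks something like this: each iteration defines a
>>     proxy class in a fresh class loader, so the perm gen keeps
>>     growing until a full GC unloads the dead classes:
>>
>>         import java.lang.reflect.InvocationHandler;
>>         import java.lang.reflect.Method;
>>         import java.lang.reflect.Proxy;
>>
>>         public class PermGenFiller {
>>             public static void main(String[] args) {
>>                 InvocationHandler handler = new InvocationHandler() {
>>                     public Object invoke(Object proxy, Method m, Object[] a) {
>>                         return null;
>>                     }
>>                 };
>>                 for (int i = 0; ; i++) {
>>                     // A fresh loader misses the proxy class cache, so a
>>                     // brand new proxy class is generated each iteration
>>                     // and sits in perm gen until its loader becomes
>>                     // unreachable and a class-unloading collection runs.
>>                     ClassLoader loader = new ClassLoader() { };
>>                     Runnable r = (Runnable) Proxy.newProxyInstance(
>>                             loader, new Class<?>[] { Runnable.class }, handler);
>>                     r.run();
>>                     if (i % 10000 == 0) {
>>                         System.out.println("proxies created: " + i);
>>                     }
>>                 }
>>             }
>>         }
>>
>>     In our real application the proxies come from a framework, but the
>>     effect on the perm gen is the same.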
>>
>>     I recommend you use a profiling tool to investigate the root
>>     cause of your Perm Gen getting filled up. There's a chance it is
>>     a leak, but as I said, in our case it was our own application's
>>     fault and G1 merely exposed the problem to us.
>>
>>     Regards,
>>     Jose
>>
>>
>>     On Fri, Jan 3, 2014 at 1:33 PM, Wolfgang Pedot
>>     <wolfgang.pedot at finkzeit.at> wrote:
>>
>>         Hi,
>>
>>         I am using G1 on 7u45 for an application-server which has a
>>         "healthy"
>>         permGen churn because it generates a lot of short-lived
>>         dynamic classes
>>         (JavaScript). Currently permGen is sized at a little over 1GB and
>>         depending on usage there can be up to 2 full GCs per day
>>         (usually only
>>         1). I have not noticed increased permGen usage with G1 (I
>>         increased the size just before switching to G1), but I have
>>         noticed something odd about the permGen usage after a
>>         collect. The class count will always fall back to the same
>>         level, currently 65k, but the permGen usage after a collect
>>         can be either ~0.8GB or ~0.55GB. There are always 3 collects
>>         resulting in 0.8GB followed by one scoring 0.55GB, so there
>>         seems to be some kind of "rhythm" going on. The full GCs are
>>         always triggered by permGen getting full, and the loaded
>>         class count goes significantly higher after a 0.55GB collect
>>         (165k vs 125k), so I guess some classes just get unloaded
>>         later...
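>>
>>         (In case it helps, I watch those counts with something along
>>         the lines of "jstat -class <pid> 60s", which reports the
>>         loaded and unloaded class totals, and take the permGen
>>         numbers from the full-GC log entries.)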
>>
>>         I cannot tell if this behaviour is due to G1 or some other
>>         factor in this application, but I do know that I have no
>>         leak because the after-collect values are fairly stable
>>         over weeks.
>>
>>         So I have not experienced this but am sharing anyway ;)
>>
>>         happy new year
>>         Wolfgang
>>
>>         On 03.01.2014 10:12, Srinivas Ramakrishna wrote:
>>         > I haven't narrowed it down sufficiently yet, but has anyone
>>         > noticed if G1 causes a higher perm gen footprint or, worse,
>>         > a perm gen leak perhaps? I do realize that G1 does not
>>         > today (as of 7u40 at least) collect the perm gen
>>         > concurrently, rather deferring its collection to a
>>         > stop-the-world full
>>         > gc. However, it has just come to my attention that despite
>>         > full stop-world gc's (on account of the perm gen getting
>>         > full), G1 still uses more perm gen space (in some instances
>>         > substantially more) than ParallelOldGC even after the full
>>         > stop-world gc's, in some of our experiments. (PS: Also
>>         > noticed that the default gc logging for G1 does not print
>>         > the perm gen usage at full gc, unlike other collectors;
>>         > looks like an oversight in logging, perhaps one that has
>>         > been fixed recently; I was on 7u40 I think.)
>>         >
>>         > While I need to collect more data using non-ParallelOld,
>>         > non-G1 collectors (especially CMS) to see how things look
>>         > and to get closer to the root cause, I wondered if anyone
>>         > else had come across a similar issue and to check if this
>>         > is a known issue.
>>         >
>>         > I'll post more details after gathering more data, but in
>>         > case anyone has experienced this, please do share.
>>         >
>>         > thank you in advance, and Happy New Year!
>>         > -- ramki
>>         >
