Strange issue about OOM

Y Srinivas Ramakrishna Y.S.Ramakrishna at Sun.COM
Tue Jan 15 11:20:22 PST 2008


Hi Chris --

> Thanks for your help.
> 
> Java version "1.4.2_13";
> Oracle Application Server Containers for J2EE 10g (10.1.2.2.0) ;
> No System.gc();
> WebService;
> 
> Would you please explain "HandlePromotionFailure"? Where can I see CR
> 6206427?
> 

In older versions of the JVM (such as the one you are using), we didn't have
a mechanism to recover from a partial scavenge, i.e. one that could not be
completed because there was not enough free space in the old generation to
promote surviving objects into. So we used to be very conservative about
starting a scavenge, and would rather precipitate a full collection if we
felt the next scavenge would not succeed. This conservatism could cause us
to waste free space in the old generation. See for example:-

http://java.sun.com/docs/hotspot/gc1.4.2/#4.4.2.%20Young%20Generation%20Guarantee|outline
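
Just to put rough numbers on that guarantee, using the figures from the log
you quote further below (this is only an illustration of the arithmetic, not
an exact account of the heuristics): with a young generation of 169600K, the
1.4.2 guarantee wants free space in the old generation comparable to the size
of the young generation before committing to a scavenge. At that point in
your log the old generation has only about 349568K - 220264K = 129304K free,
so the collector falls back to a full (Tenured) collection rather than risk a
promotion failure.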

In later JVMs, we added the ability to recover from a partially completed
scavenge and to compact the entire heap. This allowed us to be less
pessimistic about going into a scavenge, and to waste less space in
the old generation even when you run with large young generations. See:-

http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html#0.0.0.0.%20Young%20Generation%20Guarantee%7Coutline

I'd actually recommend upgrading to a more recent JVM, where you'd
see performance improvements, but if that is not possible, then
with 1.4.2_13 you should use -XX:+HandlePromotionFailure, which
should allow you to make more efficient use of the heap by avoiding
the issue described above.
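
For example, if your startup script currently invokes the JVM along the
lines of (the heap sizes here are just placeholders, not a recommendation)

    java -Xms512m -Xmx512m <other options> <your application>

then you would simply add the flag to that same set of options:

    java -XX:+HandlePromotionFailure -Xms512m -Xmx512m <other options> <your application>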

As to the question of why you need to do, for example, the following
scavenge:-

> > 25313.039: [GC 25313.039: [DefNew: 2421K->91K(169600K), 0.0079322
> > secs]25313.047: [Tenured: 220264K->220182K(349568K), 1.3464051 secs]

My only guess is that the application may be requesting the allocation of a
large object, but that's only a guess. (Yes, we should ideally have that
information available in the gc logs.)
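
Just to illustrate the kind of request I mean (the snippet below is purely
illustrative, not something taken from your application): a single
allocation such as

    // Illustrative only: one very large allocation request. If this
    // array does not fit in the space remaining in the young generation,
    // the request itself can trigger a collection even though the young
    // generation looks nearly empty in the GC log.
    byte[] big = new byte[64 * 1024 * 1024];   // ~64 MB in one object

can force a collection at what otherwise looks like a very low young
generation occupancy.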

-- ramki
