Time stopped is much longer than corresponding Full GC

Alex Aisinzon aaisinzon at guidewire.com
Fri Mar 27 09:32:08 PDT 2009


Hi all

 

I am running tests with Sun JDK 1.5.0_14, both the 32-bit and the 64-bit
builds. I am using the parallel collector (ParallelGC) with 2 GC threads
on a dual-core Opteron 280.

On the 64-bit JVM, I see instances where the pause time reported by
"Total time for which application threads were stopped:" is much longer
than the time reported for the Full GC itself.
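For reference, the log fragment below comes from GC logging enabled with
roughly the following options (heap sizing flags are omitted here, and the
exact command line may differ slightly from this sketch):

java -XX:+UseParallelGC -XX:ParallelGCThreads=2 \
     -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC \
     -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime \
     ...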

 

Here is an example of this:

 

{Heap before gc invocations=3693:
 PSYoungGen      total 2021952K, used 55892K [0x00002aabbf130000, 0x00002aac3f130000, 0x00002aac3f130000)
  eden space 1949248K, 0% used [0x00002aabbf130000,0x00002aabbf130000,0x00002aac360c0000)
  from space 72704K, 76% used [0x00002aac3a7a0000,0x00002aac3de353b0,0x00002aac3eea0000)
  to   space 72576K, 0% used [0x00002aac360c0000,0x00002aac360c0000,0x00002aac3a7a0000)
 PSOldGen        total 4194304K, used 4144351K [0x00002aaabf130000, 0x00002aabbf130000, 0x00002aabbf130000)
  object space 4194304K, 98% used [0x00002aaabf130000,0x00002aabbc067c20,0x00002aabbf130000)
 PSPermGen       total 204800K, used 122025K [0x00002aaab2930000, 0x00002aaabf130000, 0x00002aaabf130000)
  object space 204800K, 59% used [0x00002aaab2930000,0x00002aaaba05a578,0x00002aaabf130000)
29132.202: [Full GC [PSYoungGen: 55892K->0K(2021952K)] [PSOldGen: 4144351K->2185260K(4194304K)] 4200243K->2185260K(6216256K) [PSPermGen: 122025K->122025K(204800K)], 7.8002190 secs]
Heap after gc invocations=3693:
 PSYoungGen      total 2021952K, used 0K [0x00002aabbf130000, 0x00002aac3f130000, 0x00002aac3f130000)
  eden space 1949248K, 0% used [0x00002aabbf130000,0x00002aabbf130000,0x00002aac360c0000)
  from space 72704K, 0% used [0x00002aac3a7a0000,0x00002aac3a7a0000,0x00002aac3eea0000)
  to   space 72576K, 0% used [0x00002aac360c0000,0x00002aac360c0000,0x00002aac3a7a0000)
 PSOldGen        total 4194304K, used 2185260K [0x00002aaabf130000, 0x00002aabbf130000, 0x00002aabbf130000)
  object space 4194304K, 52% used [0x00002aaabf130000,0x00002aab4473b2a8,0x00002aabbf130000)
 PSPermGen       total 204800K, used 122025K [0x00002aaab2930000, 0x00002aaabf130000, 0x00002aaabf130000)
  object space 204800K, 59% used [0x00002aaab2930000,0x00002aaaba05a578,0x00002aaabf130000)
}
Total time for which application threads were stopped: 15.2489360 seconds
Application time: 7.2144290 seconds
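To spell out the discrepancy: the Full GC itself accounts for 7.8002190
seconds, yet the reported stop-the-world window is 15.2489360 seconds, so
15.2489360 - 7.8002190 = 7.4487170 seconds (roughly 7.45 s) of pause time
are not explained by the collection itself.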

 

 

I checked on the web and found descriptions of very similar behavior:

*	http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6477891
*	http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6546278

 

It remains unclear whether this is fixed in a later release of Sun JDK
1.5. 

If it is indeed the "Synchronization problem in the pseudo memory barrier
code" (bug 6546278 above), it may be fixed in JDK 1.7:
http://download.java.net/jdk7/changes/jdk7-b15.html

 

Our application is currently supported on Sun JDK 1.5, so Sun JDK 1.6 and
1.7, when available, do not constitute good options.

Should I try to upgrade to the latest Sun JDK 1.5 update, or are there
"switches" that could make this go away without sacrificing performance?
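If the cause does turn out to be the pseudo memory barrier synchronization
problem (6546278), one switch I have seen mentioned as a possible
workaround, though I have not verified it myself, is -XX:+UseMembar, which
makes the VM issue real memory barriers instead of relying on the pseudo
memory barrier mechanism, e.g.:

java -XX:+UseMembar ... (rest of the command line unchanged)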

The thing that surprises me most is that, so far, it occurs only on the
64-bit JVM.

 

Thanks for letting me know

 

Alex Aisinzon

 
