[8u] RFR backport of JDK-8144732: VM_HeapDumper hits assert with bad dump_len

Hohensee, Paul hohensee at amazon.com
Wed Nov 6 23:07:24 UTC 2019


I found http://jperfanal.sourceforge.net/java.hprof.txt, a thread stack dump that references 1.0.1; it's dated Dec 30, 2001, and the dump output carries a 1998 copyright. So 1.0.1 probably dates from 1998.

I found a file format spec at http://hg.openjdk.java.net/jdk6/jdk6/jdk/raw-file/tip/src/share/demo/jvmti/hprof/manual.html#mozTocId848088. It's from Java 6, so 1.0.2 was supported then. I also found

https://bugs.openjdk.java.net/browse/JDK-6305542: HPROF binary format needs to support large dumps

for Java 6, and

https://bugs.openjdk.java.net/browse/JDK-6313381: HPROF: agent should generate version 1.0.2 for large heaps

which updated the hprof agent to generate 1.0.2 format files for heaps larger than 4 GB in Java 6, and

https://bugs.openjdk.java.net/browse/JDK-6313383: SA: Update jmap to support HPROF binary format "JAVA PROFILE 1.0.2"

which was shipped in 8u25 in 2014.

So, JDKs/JREs starting with Java 6 can read 1.0.2 files, and the SA can read them starting with 8u25. I don't think we need to worry about using Java 5 to read files generated by Java 8, and the SA is good to go for 8.
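
For anyone who wants to check which format a particular dump uses: HPROF binary files start with a NUL-terminated version string, so a few lines of Java are enough to sniff it. A minimal sketch (illustrative only, not part of any JDK tool):

import java.io.FileInputStream;
import java.io.IOException;

public class HprofVersion {
    public static void main(String[] args) throws IOException {
        try (FileInputStream in = new FileInputStream(args[0])) {
            StringBuilder sb = new StringBuilder();
            int b;
            // The header string is NUL-terminated; stop at NUL, EOF, or a sane bound.
            while ((b = in.read()) > 0 && sb.length() < 32) {
                sb.append((char) b);
            }
            System.out.println(sb); // e.g. "JAVA PROFILE 1.0.2"
        }
    }
}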

Paul

On 10/31/19, 6:27 AM, "jdk8u-dev on behalf of Andrew John Hughes" <jdk8u-dev-bounces at openjdk.java.net on behalf of gnu.andrew at redhat.com> wrote:

    
    
    On 25/09/2019 07:25, Denghui Dong wrote:
    > Hi all, 
    >   I'd like to request a backport of JDK-8144732.
    > 
    >   In our production environment, many applications use large heaps, and there are some
    > big arrays in those heaps. When developers use jmap to dump the heap and then use Eclipse
    > MAT (mostly) or jhat to analyze the file, they often get errors. For example:
    > 
    > public class BigArray {
    >   public static void main(String[] args) throws Exception {
    >     // 2^29 longs = 4 GB of array data, enough to overflow a u4 record length
    >     long[] b = new long[1024 * 1024 * 1024 / 2];
    >     Object o = new Object();
    >     synchronized(o) {
    >       o.wait(60000); // keep the process alive so jmap can attach
    >     }
    >   }
    > }
    > 
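    >   For reference, the dump-and-parse steps are along these lines (the pid and
    > file name here are just illustrative):
    > 
    >   $ jmap -dump:format=b,file=heap.hprof <pid>
    >   $ jhat heap.hprof
    > 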
    >   If you run the above code and then dump and parse the heap as shown, you will get a
    > warning message:
    > 
    > "WARNING: Error reading heap dump or heap dump segment:  Byte count is -4294967296 instead of 0"
    > 
    >   Eclipse MAT also can't parse the dump file correctly.
    > 
    >   The root cause is that the length of the heap dump segment exceeds what the format's
    > u4 length field can represent: the array alone is 2^29 longs * 8 bytes = 2^32 bytes, so
    > the recorded length wraps around, matching the -4294967296 (= -2^32) in the warning above.
    > 
    >   I found that JDK-8144732 resolves this problem: it truncates arrays whose dumped size
    > would be too large, keeping the segment length within the limit.
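    > 
    >   Roughly, the fix caps how many array elements get dumped so the record length still
    > fits. An illustrative sketch of the idea (the actual change is in HotSpot's
    > heapDumper.cpp and is more involved; the names below are mine):
    > 
    >   // Illustrative only: cap array elements so the dumped record stays
    >   // within what the segment's u4 length field can represent.
    >   static int maxDumpableElements(long segmentBytesLeft, int arrayLength,
    >                                  int elementSize) {
    >     long needed = (long) arrayLength * (long) elementSize;
    >     if (needed <= segmentBytesLeft) {
    >       return arrayLength;                        // whole array fits
    >     }
    >     return (int) (segmentBytesLeft / elementSize); // truncate the rest
    >   }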
    > 
    > The patch (from JDK9) doesn't apply cleanly.
    > 
    > Original bug: https://bugs.openjdk.java.net/browse/JDK-8150432
    > 
    > Original patch: http://hg.openjdk.java.net/jdk9/dev/jdk/rev/91a26000bfb5
    > 
    > My webrev: http://cr.openjdk.java.net/~luchsh/8144732_8u/
    > 
    > Testing:
    >   jdk/test/demo/jvmti/hprof/HeapDumpTest.java passed.
    >   jdk/test/sun/tools/jhat/HatHeapDump1Test.java passed.
    > 
    >   What are your comments?
    > 
    > Thanks
    > Denghui Dong
    > 
    
    I'm concerned that this alters the HPROF format used for dumps under 2GB
    [0]. Is there no other way of fixing the bug without altering this? It
    may have an impact on tools expecting to parse HPROF 1.0.1 format data.
    
    I guess HPROF 1.0.2 support was already required for larger dumps, so
    I'm not sure how much of an issue this is, but it's definitely a
    compatibility change.
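    
    For reference, the difference a parser sees between the two versions comes
    down to the record tags (values from the HPROF format manual; a sketch
    only, not taken from any particular tool):
    
        // A 1.0.1-only parser expects a single HEAP DUMP record and
        // knows nothing of segments.
        static final int HPROF_HEAP_DUMP         = 0x0c; // 1.0.1 and 1.0.2
        static final int HPROF_HEAP_DUMP_SEGMENT = 0x1c; // 1.0.2 only
        static final int HPROF_HEAP_DUMP_END     = 0x2c; // 1.0.2 only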
    
    [0] https://bugs.openjdk.java.net/browse/JDK-8174881
    -- 
    Andrew :)
    
    Senior Free Java Software Engineer
    Red Hat, Inc. (http://www.redhat.com)
    
    PGP Key: ed25519/0xCFDA0F9B35964222 (hkp://keys.gnupg.net)
    Fingerprint = 5132 579D D154 0ED2 3E04  C5A0 CFDA 0F9B 3596 4222
    https://keybase.io/gnu_andrew
    
    


