RFR: 8144732: VM_HeapDumper hits assert with bad dump_len

Dmitry Samersoff dmitry.samersoff at oracle.com
Mon Feb 8 12:28:03 UTC 2016


Andreas,

Sorry for delay.

The code changes look good to me.

But the behavior of the non-segmented dump is not clear to me (line 1074):
if the dump is not segmented, then the size of the dump is always less
than max_bytes, and the code below (line 1086) will never be executed.

I think that today we could always write a segmented heap dump and
significantly simplify the logic.

Also, I think that current_record_length() doesn't need as many asserts;
one assert(dump_end == (size_t)current_offset(), "checking"); is enough.

-Dmitry

On 2016-02-01 19:20, Andreas Eriksson wrote:
> Hi,
> 
> Please review this fix for dumping of long arrays.
> 
> Bug:
> 8144732: VM_HeapDumper hits assert with bad dump_len
> https://bugs.openjdk.java.net/browse/JDK-8144732
> 
> Webrev:
> http://cr.openjdk.java.net/~aeriksso/8144732/webrev.00/
> 
> Problem:
> The hprof format uses a u4 as the record length field, but arrays can be
> longer than that (counted in bytes).
> 
> Fix:
> Truncate the dump length of the array using a new function,
> calculate_array_max_length. For a given array it returns the number of
> elements we can dump. That length is then used to truncate arrays that
> are too long.
> Whenever an array is truncated a warning is printed:
> Java HotSpot(TM) 64-Bit Server VM warning: cannot dump array of type
> object[] with length 1,073,741,823; truncating to length 536,870,908
> 
> Much of the change is moving functions needed by
> calculate_array_max_length to the DumpWriter or DumperSupport class so
> that they can be accessed.
> 
> Regards,
> Andreas


-- 
Dmitry Samersoff
Oracle Java development team, Saint Petersburg, Russia
* I would love to change the world, but they won't give me the sources.
