RFR: 8144732: VM_HeapDumper hits assert with bad dump_len
Dmitry Samersoff
dmitry.samersoff at oracle.com
Mon Feb 8 14:40:01 UTC 2016
Andreas,
On 2016-02-08 17:18, Andreas Eriksson wrote:
> Hi,
>
> On 2016-02-08 13:28, Dmitry Samersoff wrote:
>> Andreas,
>>
>> Sorry for delay.
>>
>> The code changes look good to me.
>>
>> But the behavior of the non-segmented dump is not clear to me (line 1074):
>> if the dump is not segmented, then the size of the dump is always less than
>> max_bytes, and the code below (line 1086) will never be executed.
>>
>> I think today we could always write a segmented heap dump and
>> significantly simplify the logic.
>
> Do you mean remove support for dumping non-segmented entirely?
Correct!
I think it brings more problems than it solves (but that's my personal opinion).
> Could I do that as part of this fix? And would it require a CCC request?
I guess not; it doesn't change anything visible to the outside world, but
let me re-check.
Nothing has changed for heaps larger than 2 GB; a dump of a small heap
will simply contain one more header.
All existing tools already have to support both segmented and
non-segmented dumps, so this shouldn't break anything.
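
For reference, a minimal sketch of the kind of branch this would let us
delete (SegmentedHeapDumpThreshold is the flag from the PS below; the
rest is illustrative, not the actual heapDumper.cpp code):

  // Illustrative only: choose the dump format up front.
  bool segmented = Universe::heap()->used() > SegmentedHeapDumpThreshold;
  if (segmented) {
    // write a series of HPROF_HEAP_DUMP_SEGMENT records,
    // terminated by a HPROF_HEAP_DUMP_END record
  } else {
    // write a single HPROF_HEAP_DUMP record whose u4 length
    // field must cover the whole heap
  }

Always writing segments would remove the non-segmented path entirely.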
PS: Please also update:
./share/vm/runtime/globals.hpp
1058: develop(size_t, SegmentedHeapDumpThreshold, 2*G,
./serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java
-Dmitry
>
>>
>> Also, I think that current_record_length() doesn't need that many asserts;
>> one assert(dump_end == (size_t)current_offset(), "checking"); is enough.
>
> I'd like to keep the assert(dump_len <= max_juint, "bad dump length");
> in case there is ever a similar problem on some other code path, if
> that's ok with you. I have no problem with removing the dump_start
> assert though.
OK.
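So the function would keep the dump_end check and your dump_len assert.
A minimal sketch of what I have in mind (dump_start, dump_end and
current_offset() are assumed from the webrev; the record header
arithmetic is elided):

  // Illustrative sketch of current_record_length() after the fix.
  size_t DumpWriter::current_record_length() {
    assert(dump_end == (size_t)current_offset(), "checking");
    // length of the record body, excluding the record header bytes
    size_t dump_len = dump_end - dump_start /* - header bytes */;
    assert(dump_len <= max_juint, "bad dump length");
    return dump_len;
  }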
-Dmitry
>
> Regards,
> Andreas
>
>>
>> -Dmitry
>>
>> On 2016-02-01 19:20, Andreas Eriksson wrote:
>>> Hi,
>>>
>>> Please review this fix for dumping of long arrays.
>>>
>>> Bug:
>>> 8144732: VM_HeapDumper hits assert with bad dump_len
>>> https://bugs.openjdk.java.net/browse/JDK-8144732
>>>
>>> Webrev:
>>> http://cr.openjdk.java.net/~aeriksso/8144732/webrev.00/
>>>
>>> Problem:
>>> The hprof format uses a u4 as its record length field, but arrays can be
>>> longer than that (when counted in bytes).
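>>> For example (my arithmetic, assuming 8-byte object references): an
>>> object[] with 1,073,741,823 elements needs roughly 8 GB of element
>>> data, while a u4 length field tops out at 4,294,967,295 bytes.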
>>>
>>> Fix:
>>> Truncate the dump length of the array using a new function,
>>> calculate_array_max_length. For a given array it returns the number of
>>> elements we can dump. That length is then used to truncate arrays that
>>> are too long.
>>> Whenever an array is truncated, a warning is printed:
>>> Java HotSpot(TM) 64-Bit Server VM warning: cannot dump array of type
>>> object[] with length 1,073,741,823; truncating to length 536,870,908
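>>>
>>> A rough sketch of the arithmetic (header_size and type_size stand in
>>> for the real helpers; this is not the webrev code verbatim):
>>>
>>>   // Illustrative: how many whole elements fit in a u4-sized record?
>>>   static int calc_max_length(size_t header_size, size_t type_size) {
>>>     const size_t max_juint = 0xFFFFFFFF;        // u4 record length limit
>>>     size_t max_bytes = max_juint - header_size; // room left for elements
>>>     return (int)(max_bytes / type_size);        // whole elements that fit
>>>   }
>>>
>>> With 8-byte oops this works out to the 536,870,908 above.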
>>>
>>> Much of the change is moving functions needed by
>>> calculate_array_max_length to the DumpWriter or DumperSupport class so
>>> that they can be accessed.
>>>
>>> Regards,
>>> Andreas
>>
>
--
Dmitry Samersoff
Oracle Java development team, Saint Petersburg, Russia
* I would love to change the world, but they won't give me the sources.