RFR: 8144732: VM_HeapDumper hits assert with bad dump_len
Dmitry Samersoff
dmitry.samersoff at oracle.com
Mon Feb 22 07:45:49 UTC 2016
Andreas,
Line 56: the header is 1.0.2 only now.
Besides that, looks good to me!
-Dmitry
On 2016-02-16 16:55, Andreas Eriksson wrote:
> Hi,
>
> New webrev with support for non-segmented dumps removed:
> http://cr.openjdk.java.net/~aeriksso/8144732/webrev.01/
>
> I changed calculate_array_max_length a bit (in addition to removing the
> is_segmented check), to avoid a type underflow that could happen.
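>
> (The underflow I mean: if the u4 space left for a record is smaller than
> the per-record header bytes, the unsigned subtraction wraps around.
> Roughly, a guard like this, with hypothetical names, not the exact
> webrev code:)
>
>   if (max_bytes < header_size) {
>     return 0;  // nothing fits, so dump no elements at all
>   }
>   size_t max_len = (max_bytes - header_size) / type_size;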
>
> On 2016-02-08 15:40, Dmitry Samersoff wrote:
>> Andreas,
>>
>> On 2016-02-08 17:18, Andreas Eriksson wrote:
>>> Hi,
>>>
>>> On 2016-02-08 13:28, Dmitry Samersoff wrote:
>>>> Andreas,
>>>>
>>>> Sorry for delay.
>>>>
>>>> Code changes look good to me.
>>>>
>>>> But the behavior of the non-segmented dump is not clear to me: at line
>>>> 1074, if the dump is not segmented then the size of the dump is always
>>>> less than max_bytes, and the code below (line 1086) will not be executed.
>>>>
>>>> I think today we can always write a segmented heap dump and
>>>> significantly simplify the logic.
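>>>>
>>>> Something like this (a sketch only; the tags and the version string
>>>> are from the hprof spec, the writer calls are simplified, and
>>>> segment_len is a hypothetical name):
>>>>
>>>>   writer->write_raw((void*)"JAVA PROFILE 1.0.2", 19); // version + NUL
>>>>   writer->write_u1(HPROF_HEAP_DUMP_SEGMENT); // tag 0x1C
>>>>   writer->write_u4(0);                       // timestamp delta
>>>>   writer->write_u4(segment_len);             // body length <= max_juint
>>>>   // ... heap dump sub-records ...
>>>>   writer->write_u1(HPROF_HEAP_DUMP_END);     // tag 0x2C
>>>>   writer->write_u4(0);
>>>>   writer->write_u4(0);                       // END record has no body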
>>> Do you mean removing support for non-segmented dumps entirely?
>> Correct!
>>
>> I think it brings more problems than it solves (but that's my personal
>> opinion).
>>
>>> Could I do that as part of this fix? And would it require a CCC request?
>> I guess not; it doesn't change anything visible to the outside world,
>> but let me re-check.
>>
>> Nothing has changed for heaps larger than 2GB; a dump of a small heap
>> will contain one more header.
>>
>> All existing tools have to support both segmented and non-segmented
>> dumps, so it shouldn't break anything.
>>
>> PS: Please also update:
>>
>> ./share/vm/runtime/globals.hpp
>> 1058: develop(size_t, SegmentedHeapDumpThreshold, 2*G,
>>
>> ./serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java
>>
>> -Dmitry
>>
>>>> Also, I think that current_record_length() doesn't need as many asserts;
>>>> one assert(dump_end == (size_t)current_offset(), "checking"); is enough.
>>> I'd like to keep the assert(dump_len <= max_juint, "bad dump length");
>>> in case there is ever a similar problem on some other code path, if
>>> that's ok with you. I have no problem with removing the dump_start
>>> assert though.
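>>>
>>> For clarity, the shape I have in mind (a sketch; the dump_len
>>> computation here is approximate, not the exact patch):
>>>
>>>   size_t dump_len = (size_t)current_offset() - dump_start;
>>>   assert(dump_end == (size_t)current_offset(), "checking");
>>>   assert(dump_len <= max_juint, "bad dump length"); // the one I'd keep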
>
> I removed this assert after all, since at a later point there is a
> warning("record is too large"); that checks the same thing.
>
> The hprof parser also has more problems with long arrays (see for
> example JDK-8149790 <https://bugs.openjdk.java.net/browse/JDK-8149790>)
> but I think that will require larger changes so I won't do it as part of
> this fix.
>
> Regards,
> Andreas
>
>> OK.
>>
>> -Dmitry
>>
>>
>>> Regards,
>>> Andreas
>>>
>>>> -Dmitry
>>>>
>>>> On 2016-02-01 19:20, Andreas Eriksson wrote:
>>>>> Hi,
>>>>>
>>>>> Please review this fix for dumping of long arrays.
>>>>>
>>>>> Bug:
>>>>> 8144732: VM_HeapDumper hits assert with bad dump_len
>>>>> https://bugs.openjdk.java.net/browse/JDK-8144732
>>>>>
>>>>> Webrev:
>>>>> http://cr.openjdk.java.net/~aeriksso/8144732/webrev.00/
>>>>>
>>>>> Problem:
>>>>> The hprof format uses a u4 as its record length field, but arrays can
>>>>> be longer than that (counted in bytes).
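>>>>>
>>>>> For reference, every hprof record starts with this fixed header (per
>>>>> the hprof binary format spec):
>>>>>
>>>>>   u1 tag     // record type
>>>>>   u4 time    // microseconds since the timestamp in the file header
>>>>>   u4 length  // body length in bytes, so a record caps at 4GB - 1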
>>>>>
>>>>> Fix:
>>>>> Truncate the array dump using a new function,
>>>>> calculate_array_max_length. For a given array it returns the number of
>>>>> elements we can dump; that length is then used to truncate arrays that
>>>>> are too long.
>>>>> Whenever an array is truncated a warning is printed:
>>>>> Java HotSpot(TM) 64-Bit Server VM warning: cannot dump array of type
>>>>> object[] with length 1,073,741,823; truncating to length 536,870,908
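>>>>>
>>>>> As a standalone illustration of the arithmetic (not the webrev code;
>>>>> the 25-byte header assumes an object-array sub-record with 8-byte
>>>>> identifiers: u1 tag + id array + u4 serial + u4 length + id class):
>>>>>
>>>>>   #include <cstdint>
>>>>>   #include <cstdio>
>>>>>
>>>>>   // Elements that fit in one u4-sized record after the header bytes.
>>>>>   uint64_t array_max_length(uint64_t len, uint64_t elem_size,
>>>>>                             uint64_t header_size) {
>>>>>     const uint64_t max_juint = 0xFFFFFFFFu; // u4 record length limit
>>>>>     if (max_juint < header_size) return 0;  // guard against underflow
>>>>>     uint64_t max_len = (max_juint - header_size) / elem_size;
>>>>>     return len < max_len ? len : max_len;
>>>>>   }
>>>>>
>>>>>   int main() {
>>>>>     // object[] of 1,073,741,823 refs at 8 bytes each, 25-byte header:
>>>>>     printf("%llu\n",
>>>>>            (unsigned long long)array_max_length(1073741823ULL, 8, 25));
>>>>>     return 0; // prints 536870908, matching the warning above
>>>>>   }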
>>>>>
>>>>> Much of the change is moving functions needed by
>>>>> calculate_array_max_length to the DumpWriter or DumperSupport class so
>>>>> that they can be accessed.
>>>>>
>>>>> Regards,
>>>>> Andreas
>>
>
--
Dmitry Samersoff
Oracle Java development team, Saint Petersburg, Russia
* I would love to change the world, but they won't give me the source code.