RFR (M): 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to copy

Andreas Eriksson andreas.eriksson at oracle.com
Mon Dec 14 18:34:51 UTC 2015


Hi Dmitry, thanks for looking at this.


On 2015-12-14 18:30, Dmitry Samersoff wrote:
> Andreas,
>
> Nice cleanup.
>
> Some generic comments below.
>
> 1. It would be excellent if you could split the webrev into two parts,
> one with the type cleanup and the other with the array-truncation
> changes, or even push these changes separately, as they address
> different problems.
>
> The type cleanup could be reviewed quickly, but reviewing the array
> truncation requires some time.
>
> (We already have two different CRs: JDK-8129419 for type cleanup and
> JDK-8144732 for array truncation)

Yes, that's a good point.

>
>
> 2.
> It might be better to use size_t and jlong (or julong) across the
> whole file and get rid of all other types, with a few exceptions.

I'll see what I can do, and we can look at it closer once I've split the 
type changes into a separate webrev.
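For reference, the kind of overflow the type cleanup targets can be shown in a standalone sketch (the function names and shapes here are illustrative, not the actual heapDumper.cpp code):

```cpp
#include <cstdint>

// Byte length of an array, computed the old way (32-bit arithmetic,
// wraps for large arrays) and the fixed way (widened to 64 bits first).
int32_t byte_length_32(int32_t length, int32_t elem_size) {
  // Wraps around for big arrays: e.g. a long[] near the max Java
  // array length needs more bytes than a 32-bit value can hold.
  return (int32_t)((uint32_t)length * (uint32_t)elem_size);
}

int64_t byte_length_64(int32_t length, int32_t elem_size) {
  return (int64_t)length * elem_size;  // widen before multiplying
}
```

With length 1073741823 and 8-byte elements, the 32-bit version wraps to a negative value while the 64-bit version yields the real byte length, 8589934584.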

> 3.
> Each assert costs some time in nightly testing, so we probably don't
> need as many asserts.

Alright.

>
> 4. write_internal:
>
>    There are a couple of cases where ::write writes fewer bytes than
> requested and doesn't return -1.
>    So we should check whether ::write returns a value less than len
> and, if it does, deal with errno: either repeat the entire write
> attempt (EINTR/EAGAIN) or abort writing.

Actually, I meant to ask about that, but forgot:
I tried using os::write, which handles EINTR/EAGAIN if necessary 
(depending on platform), but Solaris has an assert that fails:
http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/file/5a42c1dde332/src/os/solaris/vm/os_solaris.cpp#l5692
It checks that os::write is called by a JavaThread that is in_native, 
which we are not, since we are in the VM_thread doing a safepoint_op.
Does anyone know if that assert is really needed? It's only done for 
Solaris.
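For the record, the loop Dmitry describes could look something like this sketch (plain POSIX, not the HotSpot write_internal code; the function name is made up):

```cpp
#include <cerrno>
#include <cstddef>
#include <unistd.h>

// Retry ::write on EINTR/EAGAIN and advance past partial writes;
// return false to let the caller abort on any other error.
bool write_fully(int fd, const char* buf, size_t len) {
  while (len > 0) {
    ssize_t n = ::write(fd, buf, len);
    if (n < 0) {
      if (errno == EINTR || errno == EAGAIN) {
        continue;  // transient condition: repeat the write attempt
      }
      return false;  // real error: abort writing
    }
    buf += n;          // partial write: skip what was already written
    len -= (size_t)n;  // and retry with the remainder
  }
  return true;
}
```

(A tight retry on EAGAIN busy-waits on a non-blocking descriptor, but the dump file descriptor here is presumably blocking.)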

>
> 5. We should handle zero-length arrays in calculate_array_max_length;
> it's a legitimate case.

I'm not sure what you mean here; I believe it does handle zero-length
arrays.
It should just fall through without taking any of the if clauses.
(Or do you mean that I should add an early return at the top if the
length is zero?)
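For what it's worth, a standalone sketch of the truncation math (the function shape and the header size are illustrative assumptions, not the actual calculate_array_max_length):

```cpp
#include <cstdint>

// The hprof record length is a u4, so an array whose dump
// (header + elements) would exceed it must be cut down.
// header_size is a placeholder for the per-record header bytes.
uint32_t max_dumpable_length(uint32_t length, uint32_t elem_size,
                             uint32_t header_size) {
  const uint64_t max_bytes = UINT32_MAX;  // u4 record-length limit
  uint64_t needed = (uint64_t)length * elem_size + header_size;
  if (needed <= max_bytes) {
    return length;  // fits as-is; a zero-length array falls through here
  }
  return (uint32_t)((max_bytes - header_size) / elem_size);  // truncate
}
```

With an assumed 31-byte header and 8-byte elements this arithmetic happens to reproduce the 1,073,741,823 to 536,870,908 truncation from the warning in the original mail, though the real header size in heapDumper.cpp may differ.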

> 6. max_juint is less than SegmentedHeapDumpThreshold, so a
> non-segmented heap dump can't contain a huge array, and it's better to
> check for a segmented dump before any other checks.

Correct me if I'm wrong here, but SegmentedHeapDumpThreshold could in
theory be set so that we get a non-segmented heap dump while still
having huge arrays.
I'm not sure if this is worth taking into account, since it would most
likely lead to other problems as well, and the flag is develop-only, so
it can't happen in product builds.
What do you think would be best to do here?

- Andreas

>
> -Dmitry
>
>
> On 2015-12-14 18:38, Andreas Eriksson wrote:
>> Hi,
>>
>> Please review this fix for dumping of long arrays, and general cleanup
>> of types in heapDumper.cpp.
>>
>> Problem:
>> In several places in heapDumper.cpp, overflows could happen when
>> dumping long arrays.
>> Also, the hprof format uses a u4 as its record length field, but
>> arrays can be longer than that (counted in bytes).
>>
>> Fix:
>> Many types that were previously signed are changed to equivalent
>> unsigned types and/or to a larger type to prevent overflow.
>> The bulk of the change though is the addition of
>> calculate_array_max_length, which for a given array returns the number
>> of elements we can dump. That length is then used to truncate arrays
>> that are too long.
>> Whenever an array is truncated a warning is printed:
>> Java HotSpot(TM) 64-Bit Server VM warning: cannot dump array of type
>> object[] with length 1,073,741,823; truncating to length 536,870,908
>>
>> Much of the rest of the change is moving functions needed by
>> calculate_array_max_length to the DumpWriter or DumperSupport class so
>> that they can be accessed.
>>
>> Added a test that relies on the hprof parser, which also had a couple
>> of overflow problems (top repo changes).
>> I've also run this change against a couple of other tests, but they
>> are problematic in JPRT because they use large heaps and lots of disk.
>>
>> Bug:
>> 8129419: heapDumper.cpp: assert(length_in_bytes > 0) failed: nothing to
>> copy
>> https://bugs.openjdk.java.net/browse/JDK-8129419
>>
>> Also fixed in this change are the problems seen in these two bugs:
>> 8133317: Integer overflow in heapDumper.cpp leads to corrupt HPROF dumps
>> https://bugs.openjdk.java.net/browse/JDK-8133317
>>
>> 8144732: VM_HeapDumper hits assert with bad dump_len
>> https://bugs.openjdk.java.net/browse/JDK-8144732
>>
>> Webrev:
>> Top repo:
>> http://cr.openjdk.java.net/~aeriksso/8129419/webrev.00/jdk9-hs-rt/
>> Hotspot: http://cr.openjdk.java.net/~aeriksso/8129419/webrev.00/hotspot/
>>
>> Regards,
>> Andreas
>
