RFR: 8262386: resourcehogs/serviceability/sa/TestHeapDumpForLargeArray.java timed out [v14]
Serguei Spitsyn
sspitsyn at openjdk.java.net
Thu Aug 26 07:22:31 UTC 2021
On Thu, 26 Aug 2021 04:14:07 GMT, Lin Zang <lzang at openjdk.org> wrote:
>> src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java line 592:
>>
>>> 590: // only process when segmented heap dump is not used, since SegmentedOutputStream
>>> 591: // could create segment automatically.
>>> 592: long currentRecordLength = (dumpEnd - currentSegmentStart - 4L);
>>
>> As you moved this initialization inside the `if (!useSegmentedHeapDump)` condition, `currentRecordLength` will now be equal to 0 for segmented heap dumps as well.
>> Could you please confirm this was your intention?
>
> Yes, it is intended.
> The logic here is that when **not** using a segmented dump, it needs to get the size of the data already written from the underlying file position, and then truncate the array so that the U4 `size` slot does not overflow, as the comment states.
> But when a segmented dump is used, the previously written data has already been flushed in the previous segment, so it only needs to ensure that the current array size plus the segment header size fits in the `size` slot. Hence `currentRecordLength` is zero.
Okay, thanks.
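The distinction described above can be sketched as follows. This is a hypothetical simplification of the overflow check, not the actual `HeapHprofBinWriter` code; the method and parameter names are illustrative only.

```java
// Hypothetical sketch of the record-length logic discussed above.
// Names are illustrative; this is not the real HeapHprofBinWriter.
public class RecordLengthSketch {
    // The HPROF record "size" slot is an unsigned 4-byte integer.
    static final long MAX_U4 = 0xFFFFFFFFL;

    // Non-segmented dump: data already written into the current record
    // must be derived from the file position, so the U4 size slot is
    // checked against the existing record length plus the new array.
    static boolean fitsNonSegmented(long dumpEnd, long segmentStart, long arrayBytes) {
        // Subtract 4 to skip the 4-byte size slot itself.
        long currentRecordLength = dumpEnd - segmentStart - 4L;
        return currentRecordLength + arrayBytes <= MAX_U4;
    }

    // Segmented dump: previously written data was already flushed into
    // earlier segments, so only the new array plus the segment header
    // size must fit in the size slot.
    static boolean fitsSegmented(long headerBytes, long arrayBytes) {
        long currentRecordLength = 0L; // nothing pending in this segment
        return currentRecordLength + headerBytes + arrayBytes <= MAX_U4;
    }

    public static void main(String[] args) {
        // 896 bytes already in the record + a 500-byte array: fits.
        System.out.println(fitsNonSegmented(1_000L, 100L, 500L));
        // A MAX_U4-sized array plus a 9-byte header: overflows the slot.
        System.out.println(fitsSegmented(9L, MAX_U4));
    }
}
```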
-------------
PR: https://git.openjdk.java.net/jdk/pull/2803
More information about the serviceability-dev
mailing list