RFR: 8262386: resourcehogs/serviceability/sa/TestHeapDumpForLargeArray.java timed out

Lin Zang lzang at openjdk.java.net
Wed Mar 24 06:49:47 UTC 2021


On Tue, 23 Mar 2021 23:25:10 GMT, Chris Plummer <cjplummer at openjdk.org> wrote:

>> Dear Chris,
>> 
>>> I guess I don't understand why you would want write-through for small arrays but not large objects.
>> 
>> I think it is because the current implementation does not have code to calculate the object size before scanning it, but it does have logic to calculate the array length (`calculateArrayMaxLength()`) ahead of scanning. (BTW, I am planning to add that logic for objects, and then rewrite the whole heap dump implementation to use write-through for compressed dumps as well; I think that should be in a separate PR.)
>> 
>>> But all this seems to be doing is grouping the HPROF_HEAP_DUMP records into an array rather than having them interspersed with other types of records. How does this help, and why would this mode not always be enabled?
>> 
>> I think the original purpose of the SEGMENT heap dump is to handle large heaps. In the hprof format spec, the size slot of the heap dump record is 32-bit, which limits a dump to 4GB of used heap, so using segmented dumps helps work around that problem. And IMO the reason for not always enabling it is that every segment has a header and tail, which introduces some extra memory overhead, although it is not much.
>> 
>>> It looks like both HPROF_HEAP_DUMP_SEGMENT and HPROF_HEAP_DUMP_END are followed by a 4-byte size and 4-byte timestamp, although the timestamp seems to always be 0, and the size for END is also always 0.
>> 
>> My understanding from the code is that the 4-byte size and 4-byte timestamp only exist after HPROF_HEAP_DUMP_SEGMENT, and that HPROF_HEAP_DUMP_END actually marks the end of the segment. So I think the specification is not accurate.
>> 
>> Thanks,
>> Lin
>
>> > I guess I don't understand why you would want write-through for small arrays but not large objects.
>> 
>> I think it is because the current implementation does not have code to calculate the object size before scanning it, but it does have logic to calculate the array length (`calculateArrayMaxLength()`) ahead of scanning. (BTW, I am planning to add that logic for objects, and then rewrite the whole heap dump implementation to use write-through for compressed dumps as well; I think that should be in a separate PR.)
> 
> Can't you just call `Oop.getObjectSize()`?
> 
>> > But all this seems to be doing is grouping the HPROF_HEAP_DUMP records into an array rather than having them interspersed with other types of records. How does this help, and why would this mode not always be enabled?
>> 
>> I think the original purpose of the SEGMENT heap dump is to handle large heaps. In the hprof format spec, the size slot of the heap dump record is 32-bit, which limits a dump to 4GB of used heap, so using segmented dumps helps work around that problem. And IMO the reason for not always enabling it is that every segment has a header and tail, which introduces some extra memory overhead, although it is not much.
> 
> Ok. So `HPROF_HEAP_DUMP` is just a record, and records have a 32-bit size limit. I assume that previously only one such record was allowed. So `HPROF_HEAP_DUMP_SEGMENT` was created, and the only difference between it and `HPROF_HEAP_DUMP` is that you can have more than one `HPROF_HEAP_DUMP_SEGMENT`. Am I understanding it correctly?

Dear Chris,

> Can't you just call `Oop.getObjectSize()`?

Yes, I can use it. Then I think there is no need to use a segmented heap dump when the used heap is no larger than 4GB, but that would require a lot of code changes. Do you think it is OK to include that change in this bug fix, or should I just keep the segmented heap dump code and use the write-through mode for object dumps in this fix?
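
To make the contrast concrete, here is a minimal sketch of the two emission modes (plain Java, illustration only; every name below is hypothetical, and the real logic lives in `sun.jvm.hotspot.utilities.HeapHprofBinWriter`):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustration only: contrasts write-through emission (record length
// known up front, e.g. via Oop.getObjectSize()) with the
// buffer-then-write approach needed when the length is only known
// after scanning the object. All names here are hypothetical.
class WriteThroughSketch {
    // Write-through: the u4 length is written immediately and the
    // body is streamed straight to the file, so no buffering is needed.
    static void writeThrough(DataOutputStream out, int tag, long bodyLength,
                             BodyWriter body) throws IOException {
        out.writeByte(tag);
        out.writeInt((int) bodyLength); // assumes bodyLength fits in u4
        body.writeTo(out);
    }

    // Buffered: the body must be materialized first to learn its length.
    static void buffered(DataOutputStream out, int tag, BodyWriter body)
            throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        body.writeTo(new DataOutputStream(buf));
        out.writeByte(tag);
        out.writeInt(buf.size());
        buf.writeTo(out);
    }

    interface BodyWriter {
        void writeTo(DataOutputStream out) throws IOException;
    }
}
```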


> Ok. So `HPROF_HEAP_DUMP` is just a record, and records have a 32-bit size limit. I assume that previously only one such record was allowed. So `HPROF_HEAP_DUMP_SEGMENT` was created, and the only difference between it and `HPROF_HEAP_DUMP` is that you can have more than one `HPROF_HEAP_DUMP_SEGMENT`. Am I understanding it correctly?

Yes, I think so.
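
For reference, a sketch of the record framing as the spec describes it (illustration only; as I mentioned above, the code I read seems to differ from the spec on the END record):

```java
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of the top-level HPROF record framing per the format spec:
// u1 tag, u4 timestamp (microseconds since the header timestamp,
// 0 in practice), u4 body length. The u4 length is where the 32-bit
// (4GB) limit comes from; HPROF_HEAP_DUMP_SEGMENT exists so the heap
// can span several such records, terminated by HPROF_HEAP_DUMP_END.
class SegmentFramingSketch {
    static final int HPROF_HEAP_DUMP         = 0x0C; // single record
    static final int HPROF_HEAP_DUMP_SEGMENT = 0x1C; // may repeat
    static final int HPROF_HEAP_DUMP_END     = 0x2C; // empty body

    static void writeSegment(DataOutputStream out, byte[] body) throws IOException {
        out.writeByte(HPROF_HEAP_DUMP_SEGMENT);
        out.writeInt(0);           // u4 timestamp, written as 0
        out.writeInt(body.length); // u4 length: the source of the 4GB limit
        out.write(body);
    }

    static void writeEnd(DataOutputStream out) throws IOException {
        out.writeByte(HPROF_HEAP_DUMP_END);
        out.writeInt(0); // u4 timestamp
        out.writeInt(0); // u4 length, always 0 for END per the spec
    }
}
```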

Thanks,
Lin

-------------

PR: https://git.openjdk.java.net/jdk/pull/2803
