RFR: 8327645: Serial heap dump should not consume double amount of disk space [v2]
Man Cao
manc at openjdk.org
Fri Mar 8 09:53:57 UTC 2024
On Fri, 8 Mar 2024 03:35:16 GMT, Alex Menkov <amenkov at openjdk.org> wrote:
>> Man Cao has updated the pull request incrementally with one additional commit since the last revision:
>>
>> Fix failure under -XX:+UseSerialGC
>
> The heap dumper was switched to always use segments by [JDK-8299426](https://bugs.openjdk.org/browse/JDK-8299426) / [JDK-8321565](https://bugs.openjdk.org/browse/JDK-8321565).
> I suppose your fix will produce a broken heap dump if there are unmounted virtual threads.
> You can try running test/hotspot/jtreg/serviceability/jvmti/vthread/HeapDump/VThreadInHeapDump.java
@alexmenkov Thank you for the quick feedback. I can reproduce the failure with VThreadInHeapDump.java and `-XX:ActiveProcessorCount=1`. The problem seems to be that the writer cannot write HPROF_FRAME and HPROF_TRACE records in the middle of dumping an HPROF_HEAP_DUMP/HPROF_HEAP_DUMP_SEGMENT, so it resorts to a separate global writer.
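For reference, a rough sketch of the relevant part of an HPROF record stream (tag values per the HPROF binary format spec, shown only for illustration):

```
HPROF_UTF8 / HPROF_LOAD_CLASS ...          <- top-level metadata records
HPROF_FRAME (0x04), HPROF_TRACE (0x05)     <- top-level stack trace records
HPROF_HEAP_DUMP_SEGMENT (0x1C)
    HPROF_GC_* sub-records (roots, instance/object array dumps, ...)
HPROF_HEAP_DUMP_SEGMENT (0x1C)
    ...
HPROF_HEAP_DUMP_END (0x2C)
```

HPROF_FRAME and HPROF_TRACE are top-level records, so they cannot appear inside a segment's body.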
Do we really need to keep all the HPROF_FRAME and HPROF_TRACE records grouped together? If not, two possible solutions are:
1. Keep track of all unmounted vthread oops, and dump their stack traces after finishing the HPROF_HEAP_DUMP_SEGMENT.
2. Write an HPROF_HEAP_DUMP_END, dump the vthread's stack trace, then start another HPROF_HEAP_DUMP_SEGMENT.
The first approach seems to result in a better-organized heap dump. In any case, we probably only need to do this for the serial heap dump; the parallel heap dump could keep using the current approach.
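If it helps, below is a minimal sketch of what approach 1 could look like. All names here (DumpWriter, VThreadStackTrace, write_frame_records, write_trace_record) are hypothetical placeholders, not the actual heapDumper.cpp API:

```c++
// Sketch of approach 1: while dumping a HPROF_HEAP_DUMP_SEGMENT, remember
// unmounted vthread stack traces instead of writing their HPROF_FRAME/HPROF_TRACE
// records immediately; emit them once the segment has been closed, where
// top-level records are allowed again.
#include <vector>

class DumpWriter;          // hypothetical HPROF writer
class VThreadStackTrace;   // hypothetical captured vthread stack trace

// Hypothetical helpers that emit top-level HPROF_FRAME/HPROF_TRACE records.
void write_frame_records(DumpWriter* writer, VThreadStackTrace* trace);
void write_trace_record(DumpWriter* writer, VThreadStackTrace* trace);

class DeferredVThreadTraces {
  std::vector<VThreadStackTrace*> _pending;  // traces seen while inside a segment
public:
  // Called while walking objects inside a segment: just remember the trace.
  void defer(VThreadStackTrace* trace) { _pending.push_back(trace); }

  // Called after the current segment is finished: now it is legal to write
  // the top-level HPROF_FRAME/HPROF_TRACE records.
  void flush(DumpWriter* writer) {
    for (VThreadStackTrace* trace : _pending) {
      write_frame_records(writer, trace);
      write_trace_record(writer, trace);
    }
    _pending.clear();
  }
};
```

The serial dumper would call defer() whenever it encounters an unmounted vthread during a segment, and flush() right after closing that segment.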
-------------
PR Comment: https://git.openjdk.org/jdk/pull/18160#issuecomment-1985383273