custom/fast heap dumper
Krystal Mok
rednaxelafx at gmail.com
Fri Aug 17 23:26:23 UTC 2018
Hi Ying,
Side-stepping your question a bit: is it absolutely necessary to take
a full heap dump for your use case? Or would it be more feasible to do
some of the analysis you want online, instead of taking a heap dump and
then doing offline analysis?
Some of the analysis that people used to do on heap dump snapshots can
be done with JFR or the new low-overhead heap profiling feature. Would
those be sufficient for your use case, perhaps with a bit of extension?
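For example, my understanding is that the low-overhead heap profiling
feature (JEP 331) exposes a JVMTI SampledObjectAlloc event, so an agent
could collect allocation statistics online without ever writing a dump.
A minimal sketch against that interface (the agent and callback names
are made up, error handling is omitted):

#include <jvmti.h>
#include <stdio.h>

// Illustrative per-sample callback: in practice you would aggregate
// per-class totals instead of printing each sampled allocation.
static void JNICALL SampledAlloc(jvmtiEnv *jvmti, JNIEnv *jni,
                                 jthread thread, jobject object,
                                 jclass object_klass, jlong size) {
  char *sig = NULL;
  jvmti->GetClassSignature(object_klass, &sig, NULL);
  printf("sampled alloc: %s (%lld bytes)\n", sig, (long long)size);
  jvmti->Deallocate((unsigned char *)sig);
}

JNIEXPORT jint JNICALL Agent_OnLoad(JavaVM *vm, char *options,
                                    void *reserved) {
  jvmtiEnv *jvmti = NULL;
  vm->GetEnv((void **)&jvmti, JVMTI_VERSION_11);

  jvmtiCapabilities caps = {};
  caps.can_generate_sampled_object_alloc_events = 1;
  jvmti->AddCapabilities(&caps);

  jvmtiEventCallbacks callbacks = {};
  callbacks.SampledObjectAlloc = &SampledAlloc;
  jvmti->SetEventCallbacks(&callbacks, (jint)sizeof(callbacks));

  // Sample roughly every 512 KiB of allocation; tune as needed.
  jvmti->SetHeapSamplingInterval(512 * 1024);
  jvmti->SetEventNotificationMode(JVMTI_ENABLE,
                                  JVMTI_EVENT_SAMPLED_OBJECT_ALLOC, NULL);
  return JNI_OK;
}

Loaded via -agentpath, something along those lines sees a statistical
sample of allocations with their classes and sizes, which already covers
a good chunk of what heap dumps typically get mined for.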
At Azul Systems, the Zing JVM supports doing some of this analysis by
piggybacking on the GC. Since the C4 GC is fully concurrent (*), these
operations are not disruptive at all and have very low overhead.
For OpenJDK, with ZGC on the horizon, this kind of feature piggybacking on
a fully concurrent GC would also be possible.
My two cents,
Kris
* The C4 GC marks and compacts the heap concurrently, albeit with a few
very short pauses during the overall concurrent GC cycle. Similar story
with ZGC.
On Fri, Aug 17, 2018 at 3:51 PM, Ying Su <yingsu at fb.com> wrote:
> Hi,
>
> We want to implement a custom, fast heap dumper that works on Java 9
> and 10, because we often need to dump huge heaps (~200GB) and it takes
> 20-30 minutes with jmap. A custom dumper could cut that down, e.g. by
> zeroing out the large arrays, compressing the output, etc. We’ve been
> looking at the following 2 options:
>
>
> 1. Modify the JVMTI demo hprof implementation in JDK8
> 2. Reuse/modify jdk9-dev/hotspot/src/share/vm/services/heapDumper.cpp
> (https://github.com/netroby/jdk9-dev/tree/master/hotspot/src/share/vm/services)
>
> We’ve tried the first option, but it is very slow due to shared hash
> tables causing high lock contention. The second option is more
> complicated because we would need to access internal JVM classes, and
> we don’t know how we could deploy it to production. We’d appreciate it
> if we could get some expert opinion on how best to solve this problem.
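>
> For concreteness, the kind of JVMTI heap walk option 1 boils down to is
> roughly the following (just a sketch, shown computing a per-class
> histogram rather than writing hprof records; the names and the
> tag-per-class bookkeeping are illustrative, not the actual hprof demo
> code):
>
> #include <jvmti.h>
>
> static const int MAX_CLASSES = 1 << 20;  // illustrative bound
> static jlong total_bytes[MAX_CLASSES];   // per-class live byte counts
>
> // Heap iteration callback: class_tag is a small index assigned to each
> // loaded class beforehand with SetTag, so no shared hash table lookup
> // is needed inside the callback.
> static jint JNICALL CountObject(jlong class_tag, jlong size,
>                                 jlong *tag_ptr, jint length,
>                                 void *user_data) {
>   if (class_tag > 0 && class_tag < MAX_CLASSES)
>     total_bytes[class_tag] += size;
>   return JVMTI_VISIT_OBJECTS;  // continue the walk
> }
>
> // Requires the can_tag_objects capability.
> static void CollectHistogram(jvmtiEnv *jvmti) {
>   jvmtiHeapCallbacks callbacks = {};
>   callbacks.heap_iteration_callback = &CountObject;
>   jvmti->IterateThroughHeap(0 /* no filter */, NULL /* all classes */,
>                             &callbacks, NULL);
> }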
>
>
> Thank you very much,
> Ying