Re: Increasing heap usage with recording stream
Denghui Dong
denghui.ddh at alibaba-inc.com
Tue Dec 8 15:11:58 UTC 2020
> That's not what I observe, unfortunately. I've adjusted the program to run
> 5 * that 100M allocation loop, which takes about 110 sec for me. The heap
> size grows linearly, and the GC info in the JFR recording file shows the
> increasing post-GC heap size. Below is the updated program. The heap size
> also doesn't shrink after the final GC that follows closing the stream.
I think this problem is caused by a missing call to EventParser#resetCache somewhere in the current implementation.
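For illustration, a rough sketch of the cache shape being described; the class and field names mirror the reference chain quoted at the bottom of this thread, but the bodies are guesses, not the actual JDK sources:

import jdk.jfr.consumer.RecordedEvent;

// Hypothetical sketch only: the stream keeps one EventParser per event type,
// and each parser holds a grow-only array of RecordedEvent objects for reuse.
final class EventParser {

    private RecordedEvent[] cached = new RecordedEvent[0];
    private int cacheIndex;

    // Unless something like this runs once the cached objects are no longer
    // needed (e.g. on close, or when ordering/reuse is reconfigured), the
    // array keeps every cached RecordedEvent strongly reachable.
    void resetCache() {
        cached = new RecordedEvent[0];
        cacheIndex = 0;
    }
}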
------------------------------------------------------------------
From: Gunnar Morling <gunnar at hibernate.org>
Send Time: Tuesday, December 8, 2020 22:40
To: Erik Gahlin <erik.gahlin at oracle.com>
Cc: Gunnar Morling <gunnar at hibernate.org>; hotspot-jfr-dev <hotspot-jfr-dev at openjdk.java.net>
Subject: Re: Increasing heap usage with recording stream
Hi Denghui, Erik,
Thanks for your replies. I can confirm heap size remains constant if I call
stream.setOrdered(false);
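For completeness, a minimal sketch of that workaround as a standalone program (the class name is made up; both setters exist on RecordingStream, and the commented-out setReuse(false) line is the alternative Erik describes below):

import jdk.jfr.consumer.RecordingStream;

public class UnorderedStreamTest {

    public static void main(String[] args) throws Exception {
        try (RecordingStream stream = new RecordingStream()) {
            stream.enable("jdk.ObjectAllocationInNewTLAB");
            stream.enable("jdk.ObjectAllocationOutsideTLAB");
            // With ordering disabled, only one RecordedEvent needs to be
            // cached per event type, so the cache stays small.
            stream.setOrdered(false);
            // Alternative: disable object reuse so no caching occurs at all,
            // at the cost of allocating a fresh event object per delivery.
            // stream.setReuse(false);
            stream.onEvent(re -> {});
            stream.startAsync();
            // ... application workload ...
        }
    }
}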
> What happens if you run the program for a longer period of time? It
> should stabilise after 10-15 seconds
That's not what I observe, unfortunately. I've adjusted the program to run
5 * that 100M allocation loop, which takes about 110 sec for me. The heap
size grows linearly, and the GC info in the JFR recording file shows the
increasing post-GC heap size. Below is the updated program. The heap size
also doesn't shrink after the final GC that follows closing the stream.
--Gunnar
import jdk.jfr.consumer.RecordingStream;

public class RecordingStreamTest {

    public static byte[] bytes;

    public static void main(String[] args) throws Exception {
        RecordingStream stream = new RecordingStream();
        stream.enable("jdk.ObjectAllocationInNewTLAB");
        stream.enable("jdk.ObjectAllocationOutsideTLAB");
        stream.onEvent(re -> {});
        stream.startAsync();

        for (int j = 1; j <= 5; j++) {
            for (int i = 1; i <= 100_000_000; i++) {
                bytes = new byte[1024];
            }
            System.gc();
        }

        stream.close();
        stream.awaitTermination();

        System.gc();
        Thread.sleep(5000);
    }
}
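To watch the heap directly while this runs, GC logging can be enabled; the flag below assumes JDK 9+ unified logging:

javac RecordingStreamTest.java
java -Xlog:gc RecordingStreamTest

The full-GC lines triggered by the System.gc() calls then show the heap occupancy before and after each collection, which should make the growth (or, with setOrdered(false), the flat profile) visible without a heap dump.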
On Tue, Dec 8, 2020 at 2:59 PM Erik Gahlin <erik.gahlin at oracle.com> wrote:
> Hi Gunnar,
>
> By default, a recording stream sorts events and reuses event objects. This
> means there needs to be some caching. For example, if you emit 500 000
> events within one second, all those events must be kept in memory before
> the sorting algorithm can be invoked. When the next second happens, the
> cache will retain those event objects. Since events have different field
> layouts, the caching happens per event type.
>
> I think this is what you are seeing.
>
> If sorting is not important, you can set
> RecordingStream::setOrdered(false) and only one RecordedEvent will be
> cached per event type. If you set RecordingStream::setReuse(false), no
> caching will occur. Without caching, the observer effect will increase and
> you are more likely to trigger a GC due to the events you stream.
>
> I can't rule out a memory leak, but the code you have seems to generate a
> huge number of events in a very short period of time, so I think what you
> are seeing is the cache at work.
>
> What happens if you run the program for a longer period of time? It should
> stabilise after 10-15 seconds, when it reaches the highest number of events
> per second.
>
> Thanks
> Erik
>
> > On 8 Dec 2020, at 10:16, Gunnar Morling <gunnar at hibernate.org> wrote:
> >
> > Hi,
> >
> > Here's a basic reproducer for the behaviour I observe:
> >
> > cat > RecordingStreamTest.java <<EOF
> > import jdk.jfr.consumer.RecordingStream;
> >
> > public class RecordingStreamTest {
> >
> >     public static byte[] bytes;
> >
> >     public static void main(String[] args) throws Exception {
> >         RecordingStream stream = new RecordingStream();
> >         stream.enable("jdk.ObjectAllocationInNewTLAB");
> >         stream.enable("jdk.ObjectAllocationOutsideTLAB");
> >         stream.onEvent(re -> {});
> >         stream.startAsync();
> >
> >         for (int i = 1; i <= 100_000_000; i++) {
> >             bytes = new byte[1024];
> >         }
> >
> >         System.gc();
> >
> >         for (int i = 1; i <= 100_000_000; i++) {
> >             bytes = new byte[1024];
> >         }
> >
> >         stream.close();
> >         stream.awaitTermination();
> >
> >         System.gc();
> >     }
> > }
> > EOF
> >
> > javac RecordingStreamTest.java
> >
> > java -XX:+HeapDumpAfterFullGC \
> >     -XX:StartFlightRecording=duration=2m,name=MyRecording,filename=myrecording.jfr,settings=default \
> >     RecordingStreamTest
> >
> > The heap dump after the second full GC contains 500K RecordedEvent
> > instances, and the flight recording shows that the heap size after GC is
> > constantly increasing. Is this a usage error on my side (if so, what
> > should I do to prevent this?), or is it actually a memory leak somewhere
> > in the recording stream code?
> >
> > Many thanks for any help,
> >
> > --Gunnar
> >
> >
> > On Mon, Dec 7, 2020 at 10:20 AM Gunnar Morling <gunnar at hibernate.org> wrote:
> >>
> >> Hi,
> >>
> >> I'm using a JDK 14 RecordingStream, and I observe an ever-increasing heap
> >> usage as my application keeps running. The reference chain is
> >>
> >> JFR Event Stream Thread -> jdk.jfr.internal.consumer.ChunkParser ->
> >> parsers jdk.jfr.internal.LongMap -> objects java.lang.Object[512] ->
> >> jdk.jfr.internal.consumer.EventParser -> cached
> >> jdk.jfr.consumer.RecordedEvent[131072]
> >>
> >> I.e., that array "RecordedEvent[] cached" is continuously growing. Is this
> >> a usage error on my end? I was looking for some sort of flush() or similar
> >> API on RecordingStream, but can't find such a method.
> >>
> >> Any help would be appreciated.
> >>
> >> Thanks a lot,
> >>
> >> --Gunnar
>
>
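A usage note related to the reuse behaviour Erik describes above: with setReuse left at its default of true, the stream may recycle the same RecordedEvent instance across deliveries, so a handler should copy field values out rather than retain the event object itself. A minimal sketch (the allocationSize field belongs to the jdk.ObjectAllocationInNewTLAB event; the aggregation is left as a placeholder):

stream.setReuse(true); // the default: event objects may be recycled
stream.onEvent("jdk.ObjectAllocationInNewTLAB", re -> {
    // Copy the primitive value out; do not store 're' itself, since the
    // same instance may be reused for the next delivered event.
    long size = re.getLong("allocationSize");
    // ... aggregate 'size' here ...
});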