Shenandoah Events
Ken Dobson
kdobson at redhat.com
Wed Jan 30 17:03:15 UTC 2019
Thank you, this is great. No, I don't have the benchmarks; drop them
wherever is easiest for you.
Thanks,
Ken
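For readers following the thread, a rough sketch of the A/B comparison discussed below (recording on vs. off) might look like the following. The GC flags and jar name are taken from Zhengyu's quoted command line; the `-XX:StartFlightRecording` flag and output file name are assumptions that depend on the JDK build in use. The script only prints the two command lines, since the benchmark jar is not distributed with this message:

```shell
#!/bin/sh
# Hypothetical sketch of an overhead comparison: the same specJVM run with
# JFR recording off (baseline) and on. Flags beyond those quoted in the
# thread are assumptions; verify against your JDK's documentation.
JAVA="${JAVA_HOME:-/usr}/bin/java"
GC_OPTS="-Xmx1g -Xms1g -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC"
# The recording flag differs across JDK builds; this form is an assumption.
JFR_OPTS="-XX:StartFlightRecording=filename=shenandoah.jfr"

BASELINE_CMD="$JAVA $GC_OPTS"
RECORDED_CMD="$JAVA $GC_OPTS $JFR_OPTS"

# Print the two command lines rather than executing them.
echo "baseline: $BASELINE_CMD -jar jmh-specjvm2016.jar Derby -f 3"
echo "recorded: $RECORDED_CMD -jar jmh-specjvm2016.jar Derby -f 3"
```

Comparing the scores (and, per Aleksey's point later in the thread, the pause times from -Xlog:gc) between the two runs gives the recording-on overhead.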
On Wed, Jan 30, 2019 at 11:28 AM <zgu at redhat.com> wrote:
> On Wed, 2019-01-30 at 10:54 -0500, Ken Dobson wrote:
> > Hi Zhengyu,
> >
> > We should still find out the impact when those events are being
> > recorded to ensure it's not too significant. Would you be able to
> > instruct me on how to run the benchmarks so that I can measure the
> > performance with recording enabled versus disabled?
> Okay, we usually run specJVM and specJBB. Do you have the benchmarks?
> If not, where can I drop them?
>
> For specJVM, this is the command line I use:
> ${JAVA_HOME}/bin/java -jar jmh-specjvm2016.jar Derby --jvmArgs "-Xmx1g
> -Xms1g -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC ..." -f 3
>
> For specJBB, my script attached.
>
> Thanks,
>
> -Zhengyu
>
>
> >
> > Thanks,
> >
> > Ken
> >
> > On Tue, Jan 29, 2019 at 1:05 PM <zgu at redhat.com> wrote:
> > > On Tue, 2019-01-29 at 18:25 +0100, Aleksey Shipilev wrote:
> > > > On 1/29/19 6:03 PM, Ken Dobson wrote:
> > > > > Just following up on the possibility of running the benchmarks
> > > > > to measure the performance overhead.
> > > > > Please let me know if this would be possible and what I would
> > > > > have to do to get this done.
> > > I was initially worried about the amount of region state transition
> > > events generated. After adding the should_commit() guard, I am now
> > > less concerned.
> > >
> > > Some overhead during recording is, I think, expected. So the
> > > overhead we are talking about comes down to the additional guard
> > > test when recording is off; I doubt that is measurable.
> > >
> > > Thanks,
> > >
> > > -Zhengyu
> > >
> > >
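As a side note for readers less familiar with JFR internals: the should_commit() guard Zhengyu mentions is the VM-side analogue of the isEnabled() check on user-level jdk.jfr events. A minimal sketch of that pattern, using a hypothetical event name (not the actual Shenandoah region transition event, which lives inside the VM), might look like:

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

// Hypothetical event type for illustration only; the real Shenandoah
// region state transition event is implemented inside the VM, not in Java.
@Name("example.RegionTransition")
@Label("Region Transition")
class RegionTransitionEvent extends Event {
    int regionIndex;
    String newState;
}

public class GuardDemo {
    // Returns whether the event was actually committed.
    static boolean emitTransition(int region, String state) {
        RegionTransitionEvent event = new RegionTransitionEvent();
        // Guard, analogous to the VM-side should_commit(): skip filling in
        // and committing the event when no active recording wants it, so
        // the steady-state cost with recording off is just this branch.
        if (event.isEnabled()) {
            event.regionIndex = region;
            event.newState = state;
            event.commit();
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // With no recording running, the guard short-circuits.
        System.out.println("committed=" + emitTransition(42, "cset"));
    }
}
```

This is why the only cost that remains when recording is off is the branch itself, which is the overhead Zhengyu doubts is measurable.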
> > > >
> > > > It is possible, and it should be as simple as running the
> > > > benchmarks with/without -XX:+FlightRecorder.
> > > > You are working with Zhengyu on JFR support, right? Zhengyu knows
> > > > how to run benchmarks.
> > > >
> > > > > On Thu, Jan 24, 2019 at 11:28 AM Ken Dobson <kdobson at redhat.com
> > > > > <mailto:kdobson at redhat.com>> wrote:
> > > > > The G1 version currently intercepts individual transitions, so
> > > > > I'd hope they've measured the overhead and found it acceptable,
> > > > > but I can't be certain of that. Yes, I agree that's definitely
> > > > > the first step. Generally the default JFR profiling
> > > > > configuration adds ~2% overhead, but detailed events such as
> > > > > these are not enabled in that configuration. When using these
> > > > > events I think it would be best to disable all the default
> > > > > events and only enable the two Shenandoah events to reduce the
> > > > > overhead. If you think measuring the benchmarks is the best way
> > > > > to get this data, I'd be happy to do so if you can point me in
> > > > > the right direction.
> > > >
> > > > The first rule of benchmarking is not to assume anything,
> > > > including that someone else did the benchmarks, especially for a
> > > > different implementation.
> > > >
> > > > There is also a bigger question: how much additional latency does
> > > > this bring to Shenandoah's (tiny) pauses when lots of transitions
> > > > happen? Shenandoah logs times with -Xlog:gc, and summary times
> > > > with -Xlog:gc+stats.
> > > >
> > > > -Aleksey