Shenandoah Events

Ken Dobson kdobson at redhat.com
Wed Feb 20 16:39:06 UTC 2019


Ah right, those are max and critical, just pulled from lower down on the
results page, where they're labelled that way instead.

I'm just running the standard composite settings, I believe.

SPEC_OPTS=""
JAVA_OPTS="-Xmx50G -Xms50G -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -Xlog:gc"
MODE_ARGS=""
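
For reference, these variables get picked up by the kit's composite run
script, which boils down to roughly the following (a sketch; specjbb2015.jar
is the standard kit jar name):

java $JAVA_OPTS $SPEC_OPTS -jar specjbb2015.jar -m COMPOSITE $MODE_ARGS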

In my most recent runs the variance was 12200-13200 for max-jops and
5500-6000 for critical-jops, which is certainly tighter, but would that be an
acceptable level of variance? Also, how many runs would you recommend to get
a value you would be confident in?
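
For reference, that spread works out to about (13200-12200)/12700 ≈ 8% of the
midpoint for max-jops and (6000-5500)/5750 ≈ 9% for critical-jops. A quick
way to summarize several runs (max_jops.txt here is a hypothetical file with
one max-jops score per line):

awk '{s+=$1; if(NR==1||$1>mx)mx=$1; if(NR==1||$1<mn)mn=$1} END{m=s/NR; printf "runs=%d mean=%.0f spread=%.1f%%\n", NR, m, 100*(mx-mn)/m}' max_jops.txt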

Thanks,

Ken Dobson

On Wed, Feb 20, 2019 at 11:21 AM Roman Kennke <rkennke at redhat.com> wrote:

> Ok, sorry, I hadn't seen that.
>
> I can't really tell what you are running, though. What are those 'max' and
> 'geomean' results? I usually get max-jops and critical-jops out of
> specjbb2015.
>
> Specjbb has a tendency to produce fairly wild variance from run to run.
> +15% seems a bit far off, though. Can you post the exact settings that
> you're running with?
>
> Thanks,
> Roman
>
> > Hi Roman,
> >
> > In the email above I linked the machine I reserved off of Beaker to run
> > them on.
> >
> > Ken Dobson
> >
> > On Wed, Feb 20, 2019 at 10:56 AM Roman Kennke <rkennke at redhat.com> wrote:
> >
> >     What kind of machine are you running it on? We have observed fairly wild
> >     variance on laptops, for example, because of throttling/powersaving etc.
> >
> >     Roman
> >
> >     > Hi Aleksey,
> >     >
> >     > I've run specJBB a number of times since this email and I'm unable to
> >     > get any sort of consistency for either case. Any insights as to why
> >     > that might be?
> >     >
> >     > Thanks,
> >     >
> >     > Ken Dobson
> >     >
> >     > On Wed, Feb 6, 2019 at 3:35 PM Ken Dobson <kdobson at redhat.com> wrote:
> >     >
> >     >> Hi all,
> >     >>
> >     >> Some updates regarding testing with the benchmarks.
> >     >>
> >     >> For specJVM, after a number of tests I've noticed no significant
> >     >> differences in performance between the tests that are recorded and the
> >     >> tests that aren't. That being said, the specJVM benchmark only emits
> >     >> ~5000 heap region transition events, which seems to be 1-2 orders of
> >     >> magnitude smaller than what I'd expect from a normal process, so I
> >     >> don't think this provides any quality information regarding the
> >     >> performance impact.
> >     >>
> >     >> With SpecJBB, the jOPS numbers I've gotten were:
> >     >>
> >     >> With recording:
> >     >> Max = 14096
> >     >> Geomean = 5899
> >     >>
> >     >> No recording:
> >     >> Max = 12177
> >     >> Geomean = 5737
> >     >>
> >     >> Not sure why the results are the opposite of what would be expected,
> >     >> so any insight would be appreciated. I ran the test on this machine:
> >     >> https://beaker.engineering.redhat.com/view/hp-dl785g6-01.rhts.eng.bos.redhat.com#details
> >     >> with -Xmx50G and -Xms50G.
> >     >>
> >     >> I can zip up the whole results page if that would be helpful.
> >     >>
> >     >> Thanks,
> >     >>
> >     >> Ken Dobson
> >     >>
> >     >>
> >     >>
> >     >>
> >     >> On Wed, Jan 30, 2019 at 12:03 PM Ken Dobson <kdobson at redhat.com> wrote:
> >     >>
> >     >>> Thank you, this is great. I don't have the benchmarks, no; drop them
> >     >>> wherever is easiest for you.
> >     >>>
> >     >>> Thanks,
> >     >>>
> >     >>> Ken
> >     >>>
> >     >>> On Wed, Jan 30, 2019 at 11:28 AM <zgu at redhat.com> wrote:
> >     >>>
> >     >>>> On Wed, 2019-01-30 at 10:54 -0500, Ken Dobson wrote:
> >     >>>>> Hi Zhengyu,
> >     >>>>>
> >     >>>>> We should still find out the impact when those events are being
> >     >>>>> recorded to ensure it's not too significant. Would you be able to
> >     >>>>> instruct me as to how to run the benchmarks so that I can measure
> >     >>>>> the performance while the JVM is being recorded vs. with recording
> >     >>>>> disabled?
> >     >>>> Okay, we usually run specJVM and specJBB; do you have the
> >     >>>> benchmarks? If not, where can I drop them?
> >     >>>>
> >     >>>> For specJVM, the command line I use:
> >     >>>> ${JAVA_HOME}/bin/java -jar jmh-specjvm2016.jar Derby --jvmArgs "-Xmx1g
> >     >>>> -Xms1g -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC ..." -f 3
> >     >>>>
> >     >>>> For specJBB, my script attached.
> >     >>>>
> >     >>>> Thanks,
> >     >>>>
> >     >>>> -Zhengyu
> >     >>>>
> >     >>>>
> >     >>>>>
> >     >>>>> Thanks,
> >     >>>>>
> >     >>>>> Ken
> >     >>>>>
> >     >>>>> On Tue, Jan 29, 2019 at 1:05 PM <zgu at redhat.com> wrote:
> >     >>>>>> On Tue, 2019-01-29 at 18:25 +0100, Aleksey Shipilev wrote:
> >     >>>>>>> On 1/29/19 6:03 PM, Ken Dobson wrote:
> >     >>>>>>>> Just following up on the possibility of running the benchmarks
> >     >>>>>>>> to measure the performance overhead. Please let me know if this
> >     >>>>>>>> would be possible and what I would have to do to get this done.
> >     >>>>>> I was initially worried about the number of region state
> >     >>>>>> transition events generated. After adding the should_commit()
> >     >>>>>> guard, I am now less concerned.
> >     >>>>>>
> >     >>>>>> Some overhead during recording time, I think, is expected. So the
> >     >>>>>> overhead we are talking about comes down to the additional guard
> >     >>>>>> test when recording is off, and I doubt that is measurable.
> >     >>>>>>
> >     >>>>>> Thanks,
> >     >>>>>>
> >     >>>>>> -Zhengyu
> >     >>>>>>
> >     >>>>>>
> >     >>>>>>>
> >     >>>>>>> It is possible, and should be as simple as running the benchmarks
> >     >>>>>>> with/without -XX:+FlightRecorder.
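> >     >>>>>>>
> >     >>>>>>> A minimal sketch of that A/B comparison (benchmark.jar is just a
> >     >>>>>>> placeholder for whichever benchmark jar you run):
> >     >>>>>>>
> >     >>>>>>> # baseline: recording off
> >     >>>>>>> java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -jar benchmark.jar
> >     >>>>>>> # identical run with recording on
> >     >>>>>>> java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC \
> >     >>>>>>>     -XX:+FlightRecorder -XX:StartFlightRecording=filename=rec.jfr \
> >     >>>>>>>     -jar benchmark.jar
> >     >>>>>>>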
> >     >>>>>>> You are working with Zhengyu on JFR support, right? Zhengyu knows
> >     >>>>>>> how to run benchmarks.
> >     >>>>>>>
> >     >>>>>>>> On Thu, Jan 24, 2019 at 11:28 AM Ken Dobson <kdobson at redhat.com> wrote:
> >     >>>>>>>>     The G1 version currently intercepts individual transitions,
> >     >>>>>>>>     so I'd hope they've measured the overhead and found it
> >     >>>>>>>>     acceptable, but I can't be certain of that. Yes, I agree
> >     >>>>>>>>     that's definitely the first step. Generally the default JFR
> >     >>>>>>>>     profiling configuration is ~2% overhead, but detailed events
> >     >>>>>>>>     such as these are not enabled in that configuration. When
> >     >>>>>>>>     using these events I think it would be best to disable all
> >     >>>>>>>>     the default events and only enable the two Shenandoah events,
> >     >>>>>>>>     to reduce the overhead (a sketch follows below). If you think
> >     >>>>>>>>     measuring the benchmarks is the best way to get this data,
> >     >>>>>>>>     I'd be happy to do it if you can point me in the right
> >     >>>>>>>>     direction.
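> >     >>>>>>>>
> >     >>>>>>>>     A sketch of that trimmed-down recording (shenandoah-only.jfc
> >     >>>>>>>>     is a hypothetical settings file, made by copying the stock
> >     >>>>>>>>     profile.jfc and disabling everything except the two
> >     >>>>>>>>     Shenandoah events; benchmark.jar is again a placeholder):
> >     >>>>>>>>
> >     >>>>>>>>     java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC \
> >     >>>>>>>>         -XX:StartFlightRecording=settings=shenandoah-only.jfc,filename=shen.jfr \
> >     >>>>>>>>         -jar benchmark.jar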
> >     >>>>>>>
> >     >>>>>>> The first rule of benchmarking is not assuming anything, including
> >     >>>>>>> that someone else did them, especially for a different
> >     >>>>>>> implementation.
> >     >>>>>>>
> >     >>>>>>> There is also a bigger question: how much additional latency does
> >     >>>>>>> this bring to Shenandoah's (tiny) pauses when lots of transitions
> >     >>>>>>> happen? Shenandoah logs times with -Xlog:gc, and summary times
> >     >>>>>>> with -Xlog:gc+stats.
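> >     >>>>>>>
> >     >>>>>>> For example (benchmark.jar again a placeholder; the individual
> >     >>>>>>> pause lines can then be pulled out of the log):
> >     >>>>>>>
> >     >>>>>>> java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC \
> >     >>>>>>>     -Xlog:gc -Xlog:gc+stats -jar benchmark.jar > gc.log
> >     >>>>>>> grep -i pause gc.log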
> >     >>>>>>>
> >     >>>>>>> -Aleksey
> >     >>>
> >     >>>
> >
>
>

