Gotchas with avoiding benchmarks.jar

Brian Harris brianfromoregon at gmail.com
Mon Aug 25 18:08:51 UTC 2014


Thanks for the response!

a) Looking at the handling of JVM args in Runner, the safe thing for me is
to always explicitly set OptionsBuilder#jvmArgs (perhaps even to an empty
set) so that the host JVM args aren't automatically inherited.
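
To see what would otherwise be inherited, a quick stdlib check (no JMH
needed) lists the host JVM's launch arguments, which is what the forked
benchmark VM picks up by default when jvmArgs is left unset:

```java
import java.lang.management.ManagementFactory;
import java.util.List;

public class HostJvmArgs {
    public static void main(String[] args) {
        // The flags this (host) JVM was launched with; by default JMH's
        // Runner forwards these to the forked benchmark VM, so any flags
        // the test harness or build tool added leak into the measurement.
        List<String> inherited =
                ManagementFactory.getRuntimeMXBean().getInputArguments();
        System.out.println("Host JVM args: " + inherited);
    }
}
```

Per the suggestion above, calling OptionsBuilder#jvmArgs with an explicit
(possibly empty) list replaces this inherited set for the forked VM.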
b) JUnit allows concurrent execution with a custom @RunWith, but by default
it's always single-threaded, so good point, though the out-of-the-box
behavior is safe. To be extra sure, Runner (or our wrapper) could acquire a
static lock.
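
As a sketch of that static-lock idea (the wrapper class and its name are
hypothetical; the Runnable stands in for `new Runner(opt).run()`):

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical wrapper: serializes benchmark runs across @Test methods
// with a single static lock, so two JMH sessions never run concurrently
// in the same JVM and interfere with each other's scores.
public class SerializedBenchmarkRunner {
    private static final ReentrantLock JMH_LOCK = new ReentrantLock();

    public static void runExclusively(Runnable jmhRun) {
        JMH_LOCK.lock();   // blocks if another benchmark is in flight
        try {
            jmhRun.run();  // e.g. () -> new Runner(opt).run()
        } finally {
            JMH_LOCK.unlock();
        }
    }
}
```

Note this only guards against contention within one JVM; concurrent Jenkins
executors on the same slave would still need external coordination.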

c) If JMH always generates these two files, Runner could assert that they
are present to prevent this case.
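
A defensive pre-flight check along those lines could look like the
following; the resource names are the ones JMH's annotation processor
generates, while the class and method names here are just illustrative:

```java
// Illustrative pre-flight check for JMH's generated metadata resources.
// JMH's annotation processor emits META-INF/BenchmarkList and
// META-INF/CompilerHints; if either is missing from the classpath, it is
// better to fail loudly than to benchmark with silently broken hints.
public class JmhMetadataCheck {
    static boolean onClasspath(String resource) {
        return JmhMetadataCheck.class.getClassLoader()
                .getResource(resource) != null;
    }

    public static void main(String[] args) {
        if (!onClasspath("META-INF/BenchmarkList")) {
            throw new IllegalStateException(
                    "BenchmarkList missing: annotation processing did not run?");
        }
        if (!onClasspath("META-INF/CompilerHints")) {
            throw new IllegalStateException(
                    "CompilerHints missing: @CompilerControl would silently not work");
        }
    }
}
```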


On Mon, Aug 25, 2014 at 1:14 AM, Aleksey Shipilev <
aleksey.shipilev at oracle.com> wrote:

> Hi Brian,
>
> On 08/22/2014 08:36 PM, Brian Harris wrote:
> > I have jmh integrated with our build environment at work now. The
> template
> > benchmark is this
> >
> > public class HelloJmhTest {
> >   @Benchmark public void wellHelloThere() {}
> >
> >   @Test public void run() throws RunnerException {
> >     Options opt = new ...
> >     new Runner(opt).run();
> >   }
> > }
> >
> > That is, same as your samples except replacing psvm with junit @Test.
> Rest
> > assured, these jmh "tests" are segregated from the real tests and they
> are
> > run on a dedicated Jenkins slave which is otherwise completely quiet. I
> > just use junit as the runner hook because it's so damn convenient.
>
> Yes, our jmh-core-it (core integration tests) are built in the same
> manner -- but we care about functionality there, not performance.
>
> > I'm wondering what are the gotchas with not using the benchmarks.jar
> setup
> > that you recommend. The classpath will be huge, will that be a problem?
>
> There are multiple things one should consider while breaking from the
> sweet cradle of self-contained prebuilt JARs:
>
>  a) Isolation. When you have an uberjar, you are most probably running
> it from a separate JVM, which you explicitly control w.r.t. what
> executable you are using, what command line options you are passing, and
> basically where and when the benchmarks are run. In the "exploded"
> configuration you will have to accurately replicate all the pieces that
> are needed to run the benchmark, possibly adding more than actually
> required. In other words, if you are calling Runner from @Test, then
> Runner and forked VM will inherit all the JVM options that were
> (accidentally) supplied to @Test, possibly screwing your results.
>
>  b) Exclusivity. It is very convenient for, say, JUnit @Tests to run
> concurrently, because most well-behaved functional tests tolerate the
> external concurrency. Benchmarks do not tolerate external contenders. It
> is arguably easy to accidentally run multiple @Tests that will
> concurrently call into multiple JMH runners, and they will interfere
> with each others' scores. I know e.g. JMH IDEA plugin stomps hard on
> running multiple JMH sessions at once, can you guarantee the same with
> @Test-s?
>
>  c) Consistency. Uberjar also packs the essential metadata within it.
> First, the benchmark list that enumerates all the benchmarks you are
> able to run, along with their default options. Failure to locate the
> benchmark list is easily detectable (even though people tend to get
> completely catatonic when they face "No benchmarks to run" when their
> build configuration misbehaves). The detectability is *not* the case for
> compiler hints, which you may accidentally omit from the resource
> location, and then things break silently on the performance front (e.g.
> @CompilerControl annotations suddenly stop working, the forced inlining
> of JMH infrastructure methods silently stops working, etc.)
>
> TL;DR: Running without uberjar is probably OK, but truly paranoid people
> should prefer the uberjar to avoid the avoidable accidents.
>
> Thanks,
> -Aleksey.
>
>

