From aleksey.shipilev at oracle.com Fri Apr 1 19:06:00 2016 From: aleksey.shipilev at oracle.com (aleksey.shipilev at oracle.com) Date: Fri, 01 Apr 2016 19:06:00 +0000 Subject: hg: code-tools/jmh: 3 new changesets Message-ID: <201604011906.u31J60nD006962@aojmv0008.oracle.com> Changeset: eb41bdab96cf Author: shade Date: 2016-04-01 10:01 +0300 URL: http://hg.openjdk.java.net/code-tools/jmh/rev/eb41bdab96cf JMH v1.12. ! jmh-archetypes/jmh-groovy-benchmark-archetype/pom.xml ! jmh-archetypes/jmh-java-benchmark-archetype/pom.xml ! jmh-archetypes/jmh-kotlin-benchmark-archetype/pom.xml ! jmh-archetypes/jmh-scala-benchmark-archetype/pom.xml ! jmh-archetypes/pom.xml ! jmh-core-benchmarks/pom.xml ! jmh-core-ct/pom.xml ! jmh-core-it/pom.xml ! jmh-core/pom.xml ! jmh-generator-annprocess/pom.xml ! jmh-generator-asm/pom.xml ! jmh-generator-bytecode/pom.xml ! jmh-generator-reflection/pom.xml ! jmh-samples/pom.xml ! pom.xml Changeset: b69af3f4d568 Author: shade Date: 2016-04-01 10:01 +0300 URL: http://hg.openjdk.java.net/code-tools/jmh/rev/b69af3f4d568 Added tag 1.12 for changeset eb41bdab96cf ! .hgtags Changeset: 39ed8b3c11ce Author: shade Date: 2016-04-01 10:01 +0300 URL: http://hg.openjdk.java.net/code-tools/jmh/rev/39ed8b3c11ce Continue in 1.13-SNAPSHOT. ! jmh-archetypes/jmh-groovy-benchmark-archetype/pom.xml ! jmh-archetypes/jmh-java-benchmark-archetype/pom.xml ! jmh-archetypes/jmh-kotlin-benchmark-archetype/pom.xml ! jmh-archetypes/jmh-scala-benchmark-archetype/pom.xml ! jmh-archetypes/pom.xml ! jmh-core-benchmarks/pom.xml ! jmh-core-ct/pom.xml ! jmh-core-it/pom.xml ! jmh-core/pom.xml ! jmh-generator-annprocess/pom.xml ! jmh-generator-asm/pom.xml ! jmh-generator-bytecode/pom.xml ! jmh-generator-reflection/pom.xml ! jmh-samples/pom.xml ! 
pom.xml

From aleksey.shipilev at oracle.com Fri Apr 1 19:06:07 2016
From: aleksey.shipilev at oracle.com (Aleksey Shipilev)
Date: Fri, 1 Apr 2016 22:06:07 +0300
Subject: JMH 1.12
Message-ID: <56FEC69F.6090601@oracle.com>

Hi,

JMH 1.12 patch release is available at Maven Central (props to Evgeny Mandrikov, as usual). It includes several important improvements, mostly compatibility with JDK 9 EA that has Jigsaw integrated -- the first build with Jigsaw is 9b111:

* Compiling with 9b111 fails with CNFE: javax.annotation.Generated. This is arguably a JDK issue, but we have worked around it nevertheless (interested parties can see the dependent bug):
https://bugs.openjdk.java.net/browse/CODETOOLS-7901643

* GC profiler would fail on 9b111, which enforces stricter access controls to MXBeans. We have rewired the code to poll MXBeans more safely -- an added benefit was fixing a few JDK 6 issues too:
https://bugs.openjdk.java.net/browse/CODETOOLS-7901645

* JSON output now emits warmup/measurement batch sizes, in case you need that data for your SingleShot runs:
https://bugs.openjdk.java.net/browse/CODETOOLS-7901649

* For a while now, doing non-forked runs (-f 0) was risky correctness-wise. It was left as a power-user/debugging facility. Now we warn harder about this during the run:
https://bugs.openjdk.java.net/browse/CODETOOLS-7901650

Enjoy!
-Aleksey

From j.kinable at gmail.com Sun Apr 10 15:41:54 2016
From: j.kinable at gmail.com (J Kinable)
Date: Sun, 10 Apr 2016 11:41:54 -0400
Subject: JMH configuration and result interpretation
Message-ID:

Dear,

I recently encountered JMH and I wanted to give it a try. I have a few specific questions about how to configure JMH and how to interpret the results.

1. Through the parameter 'measurementIterations' I can specify how often a benchmark is repeated. But what is the function of the parameter 'measurementTime'? Let's say I specify measurementTime=1 second and I create an @Benchmark method which normally takes 10 minutes to complete.
Is this function interrupted/killed after 1 second? The javadoc description of measurementTime, "Time of each measurement iteration", isn't very descriptive.

2. Let's assume I run a benchmark with "AverageTime" mode. For each benchmark I get a score expressed in ms/op. How should I interpret this? What is "op" in this case? I would have expected: "average time to finish the benchmark completely".

thanks,

Joris Kinable

From j.kinable at gmail.com Sun Apr 10 15:53:28 2016
From: j.kinable at gmail.com (J Kinable)
Date: Sun, 10 Apr 2016 11:53:28 -0400
Subject: how to randomize test in JMH
Message-ID:

I've got 2 algorithms which I would like to compare. An algorithm takes a data instance as input and produces a certain result. To do a fair but thorough comparison, I would like to run my algorithms on several different data instances. Obviously, to make it a fair comparison, both algorithms should use the same pool of data instances. How would you create such a test? I could do the following:

@Benchmark
public void testAlgorithm1() {
    for (Instance instance : instances) algorithm1.run(instance);
}

@Benchmark
public void testAlgorithm2() {
    for (Instance instance : instances) algorithm2.run(instance);
}

Does this make sense, or would you use a different approach, e.g. use some of the Setup parameters/annotations?

thanks,

Joris

From nitsanw at yahoo.com Sun Apr 10 16:51:10 2016
From: nitsanw at yahoo.com (Nitsan Wakart)
Date: Sun, 10 Apr 2016 18:51:10 +0200
Subject: how to randomize test in JMH
In-Reply-To:
References:
Message-ID:

Having the loop in place offers the compiler 'unfair' optimisation opportunities (loop unrolling, hoisting, etc.; see the samples on the risks of loops). I tend to allocate a range of data to work on and loop through it, processing one element at a time. The variance between data points should average out. If the work under measurement is small, you might want to use a data set size that is a power of 2 so that you can avoid using '%' and use '&' instead.
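[Editor's note: the '&'-for-'%' trick above can be checked in isolation. A small standalone sketch, plain Java with no JMH involved; the size/counter names are illustrative:]

```java
public class MaskIndexDemo {
    public static void main(String[] args) {
        int size = 1024;          // power of two
        int mask = size - 1;      // low ten bits set

        // For non-negative indices, (i & mask) equals (i % size),
        // and the mask stays cheap even as the counter keeps growing.
        for (int i = 0; i < 100_000; i++) {
            if ((i % size) != (i & mask)) {
                throw new AssertionError("mismatch at index " + i);
            }
        }

        int next = 1030;          // a running counter, as in a benchmark state field
        System.out.println(next & mask);  // prints 6: the index wrapped around
    }
}
```

[In a JMH @State, something like `instances[next++ & mask]` would then pick the next data instance on every @Benchmark call without a modulo in the measured path.]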
I would add the data point selection method as a benchmark to give you some idea of its overhead.

> On 10 Apr 2016, at 17:53, J Kinable wrote:
>
> I've got 2 algorithms which I would like to compare. An algorithm takes a
> data instance as input and produces a certain result. To do a fair but
> thorough comparison, I would like to run my algorithms on several different
> data instances. Obviously, to make it a fair comparison, both algorithms
> should use the same pool of data instances. How would you create such a
> test. I could do the following:
>
> @Benchmark
> public void testAlgorithm1(){
> for(Instance instance : instances) algorithm1.run(instance);
> }
>
> @Benchmark
> public void testAlgorithm2(){
> for(Instance instance : instances) algorithm2.run(instance);
> }
>
> Does this make sense, or would you use a different approach, e.g. use some
> of the Setup parameters/annotations?
>
> thanks,
>
> Joris

From aleksey.shipilev at oracle.com Mon Apr 11 09:12:39 2016
From: aleksey.shipilev at oracle.com (Aleksey Shipilev)
Date: Mon, 11 Apr 2016 12:12:39 +0300
Subject: JMH configuration and result interpretation
In-Reply-To:
References:
Message-ID: <570B6A87.1090304@oracle.com>

On 04/10/2016 06:41 PM, J Kinable wrote:
> I recently encountered JMH and I wanted to give it a try. I've a few
> specific questions about how to configure JMH and how to interpret the
> result.
>
> 1. Through the parameter 'measurementIterations' I can specify how often a
> benchmark is repeated. But what is the function of the parameter
> 'measurementTime'? Let's say I specify measurementTime=1 second and I
> create an @benchmark function which normally takes 10 minutes to complete.
> Is this function interrupted/killed after 1 second? The javadoc description
> of measurementTime "Time of each measurement iteration" isn't very
> descriptive".
>
> 2. Let's assume I run a benchmark with "AverageTime" mode. For each
> benchmark I get a score expressed in ms/op. How should I interpret this?
> What is "op" in this case? I would have expected: "average time to finish
> the benchmark completely".

Answers to both questions would probably be obvious once you understand that "op" is one @Benchmark call. JMH is not supposed to be run with very fat @Benchmark-s. Instead, it will run the small @Benchmark continuously within the warmup/measurement iteration time.

-Aleksey

From nitsanw at yahoo.com Thu Apr 14 14:17:52 2016
From: nitsanw at yahoo.com (Nitsan Wakart)
Date: Thu, 14 Apr 2016 14:17:52 +0000 (UTC)
Subject: perfasm method attribution assumes class::method signature is unique
References: <1735073844.404894.1460643472140.JavaMail.yahoo.ref@mail.yahoo.com>
Message-ID: <1735073844.404894.1460643472140.JavaMail.yahoo@mail.yahoo.com>

I'm seeing a benchmark where the code under test is using Javassist to generate a reporting class. The class is generated at different call sites, with a different actual class and method but the same class name. Though the code under test is terrible and deserves what's coming to it, it also demonstrates the issue. I cannot share the benchmark because it is client code, but if needed I could reproduce the issue (load the same-named class under different classloaders). I'm not sure how JMH should best report this situation, but attributing all the cost by name seems erroneous.

Thanks,
Nitsan

From Sebastian.Millies at softwareag.com Fri Apr 15 14:34:29 2016
From: Sebastian.Millies at softwareag.com (Millies, Sebastian)
Date: Fri, 15 Apr 2016 14:34:29 +0000
Subject: Classpath of uber.jar does not contain all dependencies (Fwd: Content filtered message notification)
In-Reply-To: <5710B535.9030908@oracle.com>
References: <5710B535.9030908@oracle.com>
Message-ID: <32F15738E8E5524DA4F01A0FA4A8E4900102E4E016@HQMBX5.eur.ad.sag>

Thanks for your help. I have managed to include the contents of external jars without installing them in a local repo with the non-maven-jar-maven-plugin. That is the part of this post that may be useful to others.
The rest is SAP-specific rant.

I learned that in my concrete case I cannot do what I did, because SAP forbids it: both renaming and repackaging JCo are against the license (cf. item Q7 on https://wiki.scn.sap.com/wiki/display/ASJAVA/SAP+JCo+FAQs). For the same reason, installing an artefact in a local repo would not work anyway, because the dependency is useless: JCo will only run from a jar that does not conform to the Maven naming conventions.

I don't understand why SAP are making it intentionally difficult to work with a quasi-standard like Maven, on which other standards (like JMH) are built.

Well, perhaps I'll just unpack the uber.jar to a folder again, copy JCo into the directory and run from there. Question closed. Maybe I'll complain to SAP.

-- Sebastian

-----Original Message-----
From: Aleksey Shipilev [mailto:aleksey.shipilev at oracle.com]
Sent: Friday, April 15, 2016 11:33 AM
To: Millies, Sebastian
Subject: Classpath of uber.jar does not contain all dependencies (Fwd: Content filtered message notification)

* PGP Signed by an unknown key

Hi, Millies,

Please note your email was filtered out for some reason :(

> Question 2 still leaves me helpless: How can I include a jar that I
> absolutely must not push into any Maven repo?

You can always deploy the JAR locally in your local cache:
http://maven.apache.org/general.html#importing-jars

...or use the system scope, like this:
http://maven.apache.org/general.html#tools-jar-dependency

I don't think runnable JARs produced by JMH are attached as project artifacts, and they are therefore not pushed into a Maven repo with those dependencies onboard. See e.g. jmh-samples:
http://central.maven.org/maven2/org/openjdk/jmh/jmh-samples/1.12/

...which does not even contain the JMH dependency itself.
Thanks,
-Aleksey

-------- Forwarded Message --------
Subject: FW: Classpath of uber.jar does not contain all dependencies
From: "Millies, Sebastian"
Date: 04/15/2016 12:16 PM
To: "jmh-dev at openjdk.java.net"

> Question 2 still leaves me helpless: How can I include a jar that I absolutely must not push into any Maven repo?

* Unknown Key
* 0x62A119A7

Software AG - Sitz/Registered office: Uhlandstraße 12, 64297 Darmstadt, Germany - Registergericht/Commercial register: Darmstadt HRB 1562 - Vorstand/Management Board: Karl-Heinz Streibich (Vorsitzender/Chairman), Eric Duffaut, Dr. Wolfram Jost, Arnd Zinnhardt; - Aufsichtsratsvorsitzender/Chairman of the Supervisory Board: Dr. Andreas Bereczky - http://www.softwareag.com

From jw_list at headissue.com Mon Apr 18 16:14:13 2016
From: jw_list at headissue.com (Jens Wilke)
Date: Mon, 18 Apr 2016 18:14:13 +0200
Subject: Introduction Jens Wilke
Message-ID: <5719180.VqxOQXmEHr@tapsy>

Hello Everyone!

After lurking around for a while, it is time to say hello. I am the author of cache2k, a high performance Java caching library (http://cache2k.org). Recently I started digging into JMH to build a new benchmark suite for caching libraries. The first outcome is here:
http://cruftex.net/2016/03/16/Java-Caching-Benchmarks-2016-Part-1.html

For those curious: The graphs are done with jq, awk, bash and gnuplot. Sources are here:
https://github.com/cache2k/cache2k-benchmark

I am based in Munich, love Skiing, hmm, what ever...

Cheers,

Jens

--
"Everything superfluous is wrong!"

From jw_list at headissue.com Mon Apr 18 16:15:14 2016
From: jw_list at headissue.com (Jens Wilke)
Date: Mon, 18 Apr 2016 18:15:14 +0200
Subject: GCProfiler enhancement ideas
Message-ID: <1513702.sUPxgXb61N@tapsy>

Hi,

some metrics I would like to see (and do) in the GCProfiler:

# maximumUsedAfterGc

The maximum used heap size after a GC run over all GC events.
That's not particularly interesting for microbenchmarks, but for benchmarking libraries I'd like to have some rough metrics about the absolute memory consumption. It gives useful information; here is a prototype:
https://github.com/cache2k/cache2k-benchmark/blob/master/jmh-suite/src/main/java/org/cache2k/benchmark/jmh/GcProfiler.java

# gc.churnSum (or gc.churn.total?)

Currently the churn is added up per space. I think the space names can be different depending on the configuration. What about adding the total as a result metric that has a constant name? This way we don't have to change the analysis after a GC configuration change.

# gc.count

Here it's the other way around. Currently, there is only a total count of all GC events. What about adding separate counts per GC type (if that is reported meaningfully via the notification)?

Thoughts? Should I prep a patch? (OCA is signed)

Cheers,

Jens

--
"Everything superfluous is wrong!"

From jw_list at headissue.com Mon Apr 18 16:22:30 2016
From: jw_list at headissue.com (Jens Wilke)
Date: Mon, 18 Apr 2016 18:22:30 +0200
Subject: Prefix character at secondary results?
Message-ID: <1695340.7jZxqUCYVr@tapsy>

Hi,

all secondary results have a prefix character defined in Defaults:

public static final String PREFIX = "\u00b7";

This character was giving me quite a hard time when trying to construct queries on the JSON output. What is the reason for that? It would make life easier if the JSON field names just contained ASCII.

Cheers,

Jens

--
"Everything superfluous is wrong!"
// Jens Wilke - headissue GmbH - Germany
\// https://headissue.com

From aleksey.shipilev at oracle.com Wed Apr 27 09:06:24 2016
From: aleksey.shipilev at oracle.com (Aleksey Shipilev)
Date: Wed, 27 Apr 2016 12:06:24 +0300
Subject: GCProfiler enhancement ideas
In-Reply-To: <1513702.sUPxgXb61N@tapsy>
References: <1513702.sUPxgXb61N@tapsy>
Message-ID: <57208110.4050604@oracle.com>

Hi,

Sorry for the late reply.

On 04/18/2016 07:15 PM, Jens Wilke wrote:
> # maximumUsedAfterGc
>
> The maximum used heap size after a GC run over all GC events. That's not particular interesting for
> microbenchmarks, but for benchmarking libraries I'd like to have some rough metrics about the absolute
> memory consumption. It gives useful information, here is a protoype:
> https://github.com/cache2k/cache2k-benchmark/blob/master/jmh-suite/src/main/java/org/cache2k/benchmark/jmh/GcProfiler.java

Okay, that makes sense. The question is how to aggregate these metrics, taking transient surges into account. Especially the surges during the young collections? It probably deserves to be split per generation too.

> # gc.churnSum (or gc.churn.total?)

gc.churn.total

> Currently the churn is added by space. I think the space names can be different depending on the configuration.
> What about adding the total as a result metric that has a constant name? This way we don't have to change the
> analysis after a GC configuration change.

Yes, makes sense.

> # gc.count.
>
> Here it's the other way around. Currently, there is only a total count of all GC events. What about adding separate
> counts per GC type (if that is reported meaningful via the notification)?

You will have to drop the GarbageCollectorMXBean scalar values then, and re-use the GC notifications directly for this. GCProfiler already does this, so no problem.

> Thoughts? Should I prep a patch? (OCA is signed)

Yes, let's see what you got.
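[Editor's note: as a reference point for per-collector counts, the scalar values are already split by collector name in the standard MXBeans. A quick standalone sketch using only java.lang.management, no JMH; the notification path discussed above additionally carries the GC action and cause:]

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcCountsByCollector {
    public static void main(String[] args) {
        long total = 0;
        // One MXBean per collector, e.g. "PS Scavenge" (young) and "PS MarkSweep" (old)
        // on the parallel collector; names differ between GC configurations.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount();   // -1 if undefined for this collector
            System.out.println(gc.getName() + ": " + count);
            if (count > 0) {
                total += count;
            }
        }
        System.out.println("total: " + total);
    }
}
```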
I think these improvements would force us to beef up GCProfiler with options support, to disable some metrics by default. These are the metrics that should probably be available by default: gc.churn.total.norm, gc.alloc.total.norm, gc.count.total -- and others should be easy to enable. See other profilers, e.g. StackProfiler, to see how they treat Options.

Thanks,
-Aleksey

From aleksey.shipilev at oracle.com Wed Apr 27 09:33:02 2016
From: aleksey.shipilev at oracle.com (Aleksey Shipilev)
Date: Wed, 27 Apr 2016 12:33:02 +0300
Subject: Prefix character at secondary results?
In-Reply-To: <1695340.7jZxqUCYVr@tapsy>
References: <1695340.7jZxqUCYVr@tapsy>
Message-ID: <5720874E.10104@oracle.com>

On 04/18/2016 07:22 PM, Jens Wilke wrote:
> all secondary results have a prefix character defined in Defaults:
>
> public static final String PREFIX = "\u00b7";
>
> This character was giving me quite a hard time, when trying to construct queries on the
> JSON output.
>
> What is the reason for that?

So, the underlying reason is this:
https://bugs.openjdk.java.net/browse/CODETOOLS-7901367

In the text report, all results are sorted by label lexicographically. Profiler results should come after the "benchmark" secondary results, which forces us to use some symbol that comes after any 7-bit ASCII symbol. This brings us to the extended ASCII table, and there, 0xB7 (dot) is an obvious choice, given that we are using non-ASCII characters in the output anyway.

> It would make live easier, if the JSON field names just contain ASCII.

Well, nothing really prevents us from taking a step back to the basic ASCII table and using Defaults.PREFIX = "~". I wonder what others are thinking about this.

Cheers,
-Aleksey

From Sebastian.Millies at softwareag.com Wed Apr 27 09:53:47 2016
From: Sebastian.Millies at softwareag.com (Millies, Sebastian)
Date: Wed, 27 Apr 2016 09:53:47 +0000
Subject: Instructions for setting up multithreaded tests?
Message-ID: <32F15738E8E5524DA4F01A0FA4A8E4900102E589B5@HQMBX5.eur.ad.sag>

Hi there,

not sure if I am doing this right. I want a test that has both global state (shared across benchmark methods) and local state (new for each iteration). I want the test to be multi-threaded, so that each iteration is executed by multiple threads. Thus I will get contention on the local state within each iteration.

Here's an example. Is this the right way to do it? Are there any docs/instructions beyond the samples included with JMH? I am having particular trouble understanding the interaction between the setup levels and the scopes.

-- Sebastian

@BenchmarkMode({ Mode.AverageTime })
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
@Warmup(iterations = 10, time = 1000, timeUnit = TimeUnit.MILLISECONDS)
@Measurement(iterations = 20, time = 1000, timeUnit = TimeUnit.MILLISECONDS)
@Fork(1)
public class ConcurrentBenchmark {

    private Object globalState; // want to set this up once for each benchmark method

    private ConcurrentHashMap<String, Integer> contended; // want a new map for each iteration of a benchmark method

    @Setup(Level.Trial)
    public void setUpGlobal() throws InterruptedException {
        globalState = new Object();
    }

    @Setup(Level.Iteration)
    public void setUpLocal() throws InterruptedException {
        contended = new ConcurrentHashMap<>();
    }

    // want contention on CHM in each iteration between all threads executing this method, but independence from test2
    @Benchmark
    @Threads(2)
    public Object test1() {
        return contended.putIfAbsent("x", 1);
    }

    // want contention on CHM in each iteration between all threads executing this method, but independence from test1
    @Benchmark
    @Threads(2)
    public Object test2() {
        return contended.putIfAbsent("y", 2);
    }

    public static void main(String[] args) throws RunnerException {
        Locale.setDefault(Locale.ENGLISH);
        Options opt = new OptionsBuilder().verbosity(VerboseMode.NORMAL)
                .include(".*" + ConcurrentBenchmark.class.getSimpleName() + ".*").build();
        new Runner(opt).run();
    }
}
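[Editor's note: for contrast with the single-class version above, here is a sketch of the same setup split into two state objects, one shared and one per-thread, following the states pattern from the JMH samples; the field names are illustrative:]

```java
import java.util.concurrent.ConcurrentHashMap;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

public class TwoStatesSketch {

    @State(Scope.Benchmark)                 // one instance, visible to all threads
    public static class Shared {
        Object globalState;
        ConcurrentHashMap<String, Integer> contended;

        @Setup(Level.Trial)
        public void global() { globalState = new Object(); }

        @Setup(Level.Iteration)             // fresh map for every iteration
        public void perIteration() { contended = new ConcurrentHashMap<>(); }
    }

    @State(Scope.Thread)                    // one private instance per benchmark thread
    public static class Local {
        int threadLocalCounter;
    }

    @Benchmark
    public Object test(Shared shared, Local local) {
        // all threads hit the same map; each thread keeps its own counter
        local.threadLocalCounter++;
        return shared.contended.putIfAbsent("x", 1);
    }
}
```

[Both state objects are injected as plain method arguments; JMH instantiates one Shared per run and one Local per thread.]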
From aleksey.shipilev at oracle.com Wed Apr 27 10:02:24 2016
From: aleksey.shipilev at oracle.com (Aleksey Shipilev)
Date: Wed, 27 Apr 2016 13:02:24 +0300
Subject: Instructions for setting up multithreaded tests?
In-Reply-To: <32F15738E8E5524DA4F01A0FA4A8E4900102E589B5@HQMBX5.eur.ad.sag>
References: <32F15738E8E5524DA4F01A0FA4A8E4900102E589B5@HQMBX5.eur.ad.sag>
Message-ID: <57208E30.6000203@oracle.com>

On 04/27/2016 12:53 PM, Millies, Sebastian wrote:
> not sure if I am doing this right. I want a test that has both global
> state (shared across benchmark methods) and local ( new for each
> iteration). I want the test to be multi-threaded, so that each
> iteration is executed by multiple threads. Thus I will get contention
> on the local state within each iteration.

What's the problem with constructing two state objects, one with Scope.Benchmark where the shared data is put, and one with Scope.Thread, where the local data resides?
http://hg.openjdk.java.net/code-tools/jmh/file/39ed8b3c11ce/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_03_States.java

You can reference *both* states in your @Benchmark arguments. You can even make these states dependent on each other, in case local threads have to work on a subset of the shared state, as JMHSample_29_StatesDAG explains:
http://hg.openjdk.java.net/code-tools/jmh/file/39ed8b3c11ce/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_29_StatesDAG.java

> I am having particular trouble understanding the interaction between
> the setup levels and the scopes.

They are pretty much orthogonal? Setup level tells when to run @Setup/@TearDown.
Scope defines what threads have access to the state object, and, by extension, what threads are eligible to run @Setup/@TearDown on them.

-Aleksey

From jw_list at headissue.com Wed Apr 27 11:06:39 2016
From: jw_list at headissue.com (Jens Wilke)
Date: Wed, 27 Apr 2016 13:06:39 +0200
Subject: Prefix character at secondary results?
In-Reply-To: <5720874E.10104@oracle.com>
References: <1695340.7jZxqUCYVr@tapsy> <5720874E.10104@oracle.com>
Message-ID: <2534619.4mRj2IJnEK@tapsy>

Aleksey,

On Wednesday 27 April 2016 12:33:02 Aleksey Shipilev wrote:
> So, the underlying reason is this:
> https://bugs.openjdk.java.net/browse/CODETOOLS-7901367
>
> In text report, all results are sorted by label lexicographically.
> Profiler results should come after the "benchmark" secondary results,
> which forces us to use some symbol that comes after any 7-bit ASCII
> symbol. This brings us to extended ASCII table, and there, 0xB7 (dot) is
> an obvious choice, given that we are using non-ASCII characters in the
> output anyway.
>
> > It would make live easier, if the JSON field names just contain ASCII.
>
> Well, nothing really prevents us to take a step back to basic ASCII
> table, and use Defaults.PREFIX = "~". I wonder what others are thinking
> about this.

I don't have a good feeling about special characters in a name. Although it should always be possible to escape them, it is always a pain if they have another meaning in the tool of your choice... The main problem seems to be that the same namespace is used by two separate entities: the JMH profilers and the benchmark creators. So the obvious, "perfect" solution would be to give the profiler results their own bucket. I think switching to '~' isn't really an improvement that justifies the incompatible change, and the perfect solution can wait.

Thanks for the background information on this!

Cheers,

Jens

--
"Everything superfluous is wrong!"
// Jens Wilke - headissue GmbH - Germany
\// https://headissue.com

From joel.moberg at gmail.com Fri Apr 29 16:35:37 2016
From: joel.moberg at gmail.com (Joel Moberg)
Date: Fri, 29 Apr 2016 18:35:37 +0200
Subject: Parametrise @Threads
Message-ID:

I googled the list to find a way to parametrise the number of threads used for a benchmark. This question was asked before here:
http://mail.openjdk.java.net/pipermail/jmh-dev/2014-October/001423.html

There is a link to an issue describing that @Threads could accept parameters. The suggestion is to either let some annotations accept a sequence, or let users parametrize them with the @Params annotation. I think the first suggestion is better, and maybe easier to implement (if annotations can extend a base annotation?). I have not written any annotation code yet. But I want this feature, and want to know whether there is still interest. Is there any way I can help?
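[Editor's note: in the meantime, one workaround is to sweep the thread count from the Runner API rather than the annotation; the builder's threads(int) mirrors the -t command-line option, and runtime options generally take precedence over annotations. The include pattern below is hypothetical:]

```java
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class ThreadSweep {
    public static void main(String[] args) throws RunnerException {
        // Run the same benchmark at several thread counts in one session.
        for (int threads : new int[] { 1, 2, 4, 8 }) {
            Options opt = new OptionsBuilder()
                    .include(".*MyBenchmark.*")  // hypothetical benchmark pattern
                    .threads(threads)            // runtime option, like -t on the command line
                    .build();
            new Runner(opt).run();
        }
    }
}
```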