Setup time is counted against iteration time in Mode.SingleShotTime (JMH-1.0)

vyazelenko at yahoo.com
Tue Sep 2 14:16:20 UTC 2014


Ok, thanks for expanding on the harness control thread and timeouts. I agree with everything said, but I still consider the behavior of SingleShotTime magic.

Currently SingleShotTime is the only mode that seems to logically fit the description "timeout-agnostic". Until now I thought this mode was meant for kilo- and mega-benchmarks, but it turns out that is not the case (at least not in the default configuration). So support for such benchmarks is either missing or should be made more explicit.
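
To make this concrete, here is roughly the kind of kilo-bench I have in mind (a hypothetical sketch; the class name and payload are made up):

    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;

    @BenchmarkMode(Mode.SingleShotTime)
    @OutputTimeUnit(TimeUnit.SECONDS)
    public class KiloBench {

        @Benchmark
        public long runOnce() {
            // stand-in for a payload that legitimately takes
            // minutes and is measured exactly once
            long acc = 0;
            for (long i = 0; i < 10_000_000_000L; i++) {
                acc += i;
            }
            return acc;
        }
    }

If the single shot (plus @Setup, as discussed) runs past the default timeout, the harness terminates it unless -r is raised, which is exactly the surprise I mean.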

How about adding another command line switch, -waitForever, similar to -foe and -gc? At least it would allow altering the behavior for all benchmarks at once. ;)

Sent from my iPhone

> On Sep 2, 2014, at 15:32, Aleksey Shipilev <aleksey.shipilev at oracle.com> wrote:
> 
> On 09/02/2014 05:18 PM, vyazelenko at yahoo.com wrote:
>>> Yes, there is an implementation reason for that: @Setup is executed by
>>> the generated code, and so the harness control thread is oblivious of
>>> the time spent in @Setup vs. @Benchmark.
>> I have a problem with that. It is neither intuitive nor expected
>> that @Setup/@TearDown times are considered part of the actual
>> measurement, especially if Level.Trial is used.
> 
> @Setup/@TearDown-s are not considered part of the "measurement". @Setup
> methods are not measured in any case, therefore your example about the
> heavyweight-@Setup kilo-benchmark is not applicable.
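> 
> To illustrate (a hypothetical sketch, names made up): in a benchmark
> like this, the time spent in prepare() never shows up in the reported
> score, only run() is timed...
> 
>     import org.openjdk.jmh.annotations.*;
> 
>     @State(Scope.Benchmark)
>     @BenchmarkMode(Mode.SingleShotTime)
>     public class HeavySetupBench {
> 
>         int[] data;
> 
>         @Setup(Level.Trial)
>         public void prepare() {
>             // heavyweight setup: runs before the measurement,
>             // and is never included in the reported score
>             data = new int[100_000_000];
>         }
> 
>         @Benchmark
>         public long run() {
>             long sum = 0;
>             for (int v : data) sum += v;
>             return sum;
>         }
>     }
> 
> ...but, as said, the control thread's timeout clock does cover
> prepare() as well.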
> 
> My point is about the harness control thread, which guards the timeout
> and forcefully terminates the payload if it has timed out. I see no
> particular reason for the *control* thread to track the @Benchmark time
> in isolation: once you are about to hit a large timeout, you already
> have a problem :)
> 
>>> Anyway, that is arguably not-the-issue, and here is why. The timeouts
>>> are there to recover from misbehaving benchmarks, and a @Setup taking
>>> 10 minutes to run is arguably misbehaving. We cannot afford to wait
>>> indefinitely for broken benchmarks when we are running large benchmark
>>> suites. And once we are facing a misbehaving workload, it does not
>>> seem all that necessary to accurately track the workload time.
>> Well, if @Setup is not considered part of the workload, then there
>> would be no problem to start with.
> 
> See above, not an issue.
> 
>> I think now, with 1.0 released, such a change can be considered.
>> Also, waiting indefinitely for @Setup/@TearDown, and maybe even the
>> benchmark proper, should be considered.
> 
> Maybe. But so far, there is no strong evidence it is required. If you
> have a long-running benchmark with a large @Setup/@TearDown/@Benchmark,
> you can set a larger timeout with -r (or its equivalent in the API and
> annotations). Users will still have to do this step, and the property
> "-Djmh.waitForever=true" is not really different from "-r 10d(ays)".
> 
> 
>>> And what would be the best place to document this? I am puzzled about
>>> that, and since the issue is about a corner case, it does not seem to
>>> belong in the generic documentation.
>> Why not document it in the "time" property of the @Warmup/@Measurement
>> annotations? Saying something like:
>> "If time is undefined, then depending on the BenchmarkMode, JMH will:
>>  - wait at least 5 seconds...
>>  - wait for 10 minutes in the case of SingleShotTime...
>> To override the defaults, please set the time value explicitly..."
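>> For example (a hypothetical sketch of such an explicit override):
>> 
>>     import java.util.concurrent.TimeUnit;
>> 
>>     import org.openjdk.jmh.annotations.*;
>> 
>>     @BenchmarkMode(Mode.SingleShotTime)
>>     @Measurement(iterations = 1, time = 30, timeUnit = TimeUnit.MINUTES)
>>     public class ExplicitTimeBench {
>> 
>>         @Benchmark
>>         public void payload() {
>>             // single-shot payload that may legitimately run for
>>             // much longer than the default budget
>>         }
>>     }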
> 
> Alas, it does not work that easily: there are also the Java API
> Options, the command line interface, etc.
> 
> Thanks,
> -Aleksey.
> 

