throughput vs latency, what to look at?

Maneesh Bhunwal maneesh.bhunwal at gmail.com
Thu Apr 15 15:47:33 UTC 2021


Thanks Aleksey for your reply.

I was using @BenchmarkMode(Mode.All), hence it was reporting both
throughput and average time (p95, p99, p999, p9999).
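For reference, a minimal benchmark sketch that restricts measurement to the two modes under discussion instead of Mode.All (this assumes the JMH dependency is on the classpath; the class name and workload body are placeholders, not the actual benchmark from this thread):

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Measure only throughput and average time, rather than Mode.All,
// so the report stays focused on the two modes being compared.
@BenchmarkMode({Mode.Throughput, Mode.AverageTime})
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Thread)
public class ModesSketch {

    @Benchmark
    public double work() {
        // Placeholder workload; substitute the real code under test.
        return Math.log(System.nanoTime());
    }
}
```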

> Is it "IMO", or have you actually profiled the workload to see there is no
> funky locking involved?
>
> If you want more help, you need to show us a MCVE.
>>> I will get back on this.

Regards
Maneesh Bhunwal




On Thu, 15 Apr 2021 at 20:47, Aleksey Shipilev <shade at redhat.com> wrote:

> On 4/15/21 5:05 PM, Maneesh Bhunwal wrote:
> > I have a generic question and a specific question. can you please help me
> > with this?
> >
> > JMH gives out 2 metrics: throughput (ops/ms) and latency (ms/op).
>
> Note that JMH has two _benchmark modes_, not metrics: throughput and
> average time (not latency).
>
> For example, the way the results are aggregated depends on the benchmark
> mode. The average times are averaged across measurement threads, while
> throughput is summed up. This way, if you run the ideal test with
> different numbers of threads, then average time is always C sec/op, and
> throughput is C*N ops/sec, where C is the workload constant and N is the
> number of threads.
>
> > When we are looking at results, which metric should we focus on? Is there
> > any general guideline?
>
> The general guideline is: it depends. This is why these two benchmark
> modes exist.
>
> > I am not able to understand how to interpret the results. IMO, if there
> > are no locks and latencies are similar, throughput should be the same.
> > So I am confused.
>
> Is it "IMO", or have you actually profiled the workload to see there is no
> funky locking involved?
>
> If you want more help, you need to show us a MCVE.
>
> --
> Thanks,
> -Aleksey
>
>
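The aggregation rule described above can be sketched in plain Java (this is illustrative only, not JMH internals; the workload constant C and the class/variable names are assumptions for the sake of the example):

```java
import java.util.Arrays;

// Sketch of how per-thread results combine for an ideal workload:
// average times are averaged across threads, throughputs are summed.
public class AggregationSketch {

    // Workload constant: every thread takes C seconds per operation (assumed).
    static final double C = 0.001;

    public static void main(String[] args) {
        for (int threads : new int[]{1, 2, 4, 8}) {
            double[] perThreadSecPerOp = new double[threads];
            Arrays.fill(perThreadSecPerOp, C);

            // Average time: averaged across measurement threads -> stays C sec/op.
            double avgSecPerOp =
                    Arrays.stream(perThreadSecPerOp).average().orElse(0);

            // Throughput: per-thread rates (1/C each) are summed -> N/C ops/sec.
            double opsPerSec =
                    Arrays.stream(perThreadSecPerOp).map(t -> 1.0 / t).sum();

            System.out.printf("N=%d: avg=%.4f sec/op, throughput=%.1f ops/sec%n",
                    threads, avgSecPerOp, opsPerSec);
        }
    }
}
```

So for the ideal (contention-free) workload, scaling the thread count leaves average time flat while throughput grows linearly; any divergence from that pattern in a real run points at contention or other scaling effects.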


More information about the jmh-dev mailing list