[External] : Re: jstack, profilers and other tools

Alex Otenko oleksandr.otenko at gmail.com
Thu Jul 28 23:22:18 UTC 2022


Or, to put it yet another way: you need to provision not for the average
number of requests in the system, but for the maximum number of requests
in the system.

In a system with a finite thread count this translates into support for
arbitrarily long request queues (which, likewise, do not depend on the
request rate). In a thread-per-request system it necessarily means an
arbitrarily large number of threads. (Of course, this is only a model of
an idealized system.)
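
To make that concrete (this is my own sketch of the idealized infinite-server model, not anything from the JEP): with Poisson arrivals at rate λ and mean service time W, the number of requests in flight is Poisson-distributed with mean λW, so for any fixed capacity n the probability of exceeding it is strictly positive, no matter how low the rate is:

```java
// Sketch (idealized model, not from the JEP): with Poisson arrivals the
// number of requests in flight is Poisson with mean lambda * W, so
// P(N > n) is strictly positive for every finite capacity n.
public class PoissonTail {
    // P(N > n) for a Poisson random variable with the given mean.
    static double probExceeds(double mean, int n) {
        double pmf = Math.exp(-mean); // P(N = 0)
        double cdf = pmf;
        for (int k = 1; k <= n; k++) {
            pmf *= mean / k;          // P(N = k) from P(N = k - 1)
            cdf += pmf;
        }
        return 1.0 - cdf;
    }

    public static void main(String[] args) {
        double mean = 10 * 0.1; // lambda = 10 req/s, W = 0.1 s -> 1 request on average
        // Even with one request in flight on average, ten concurrent
        // requests still occur with non-zero probability.
        System.out.println(probExceeds(mean, 10) > 0); // true
    }
}
```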

Thank you for bearing with me.

Alex

On Fri, 29 Jul 2022, 00:11 Alex Otenko, <oleksandr.otenko at gmail.com> wrote:

> Thanks.
>
> That works under the _assumption_ that the time stays constant, which is
> also a statement made in the JEP.
>
> But we know that as the thread count grows, the time goes down. So one
> needs further reasoning to explain why T goes up while at the same time
> keeping the assumption that the time doesn't go down.
>
> In addition, it makes sense to talk of thread counts when the count
> presents a limit of some sort - e.g. if N threads are busy and one more
> request arrives, the request waits for a thread to become available - then
> we have a system with N threads. Thread-per-request does not have that
> limit: for any number of threads already busy, if one more request
> arrives, it still gets a thread.
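
A minimal sketch of "one more request still gets a thread", assuming JDK 21's `Executors.newVirtualThreadPerTaskExecutor` (my illustration, not code from the JEP): every submitted task gets a fresh virtual thread, so no request ever waits for a pool slot.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPerRequest {
    // Submit 'requests' tasks; each gets its own virtual thread,
    // so none of them waits for a thread to become available.
    static int handleAll(int requests) {
        AtomicInteger started = new AtomicInteger();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < requests; i++) {
                exec.submit(() -> {
                    started.incrementAndGet();
                    try { Thread.sleep(10); } catch (InterruptedException e) { }
                });
            }
        } // try-with-resources close() waits for all tasks to finish
        return started.get();
    }

    public static void main(String[] args) {
        // 10_000 concurrent "requests", far beyond a typical platform-thread pool
        System.out.println(handleAll(10_000)); // 10000
    }
}
```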
>
> My study concludes that thread-per-request is the case of an infinite
> number of threads (from a mathematical point of view). In this case,
> talking about T and its dependency on the request rate is meaningless.
>
> A Poisson distribution of requests means that for any request rate there
> is a non-zero probability of any number of requests being in the system -
> and hence of that number of threads. Connecting this number to a rate is
> meaningless.
>
> Basically, you need to support "a high number of threads" at any request
> rate, not just at a very high request rate.
>
> On Thu, 28 Jul 2022, 23:35 Daniel Avery, <danielaveryj at gmail.com> wrote:
>
>> Maybe this is not helpful, but it is how I understood the JEP
>>
>>
>> This is Little’s Law:
>>
>>
>> L = λW
>>
>>
>> Where
>>
>> - L is the average number of requests being processed by a stationary
>> system (aka concurrency)
>>
>> - λ is the average arrival rate of requests (aka throughput)
>>
>> - W is the average time to process a request (aka latency)
>>
>>
>> This is a thread-per-request system:
>>
>>
>> T = L
>>
>>
>> Where
>>
>> - T is the average number of threads
>>
>> - L is the average number of requests (same L as in Little’s Law)
>>
>>
>> Therefore,
>>
>>
>> T = L = λW
>>
>> T = λW
>>
>>
>> Prior to Loom, the memory footprint of (platform) threads gives a bound
>> on thread count:
>>
>>
>> T <= ~1000
>>
>>
>> After Loom, the reduced memory footprint of (virtual) threads gives a
>> relaxed bound on thread count:
>>
>>
>> T <= ~1000000
>>
>>
>> Relating thread count to Little’s Law tells us that virtual threads can
>> support a higher average arrival rate of requests (throughput), or a higher
>> average time to process a request (latency), than platform threads could:
>>
>>
>> T = λW <= ~1000000
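
Plugging numbers into that bound (a sketch using the rough ~1000 and ~1000000 figures above, with an assumed latency of W = 100 ms): for a fixed W, the bound on T caps the sustainable throughput at λ = T / W.

```java
// Little's Law: T = lambda * W, so for a fixed average latency W the
// bound on thread count T caps throughput at lambda = T / W.
public class LittlesLaw {
    static double maxThroughput(double maxThreads, double latencySeconds) {
        return maxThreads / latencySeconds;
    }

    public static void main(String[] args) {
        double w = 0.1; // assumed average latency: 100 ms
        System.out.println(maxThroughput(1_000, w));     // platform threads: 10000.0 req/s
        System.out.println(maxThroughput(1_000_000, w)); // virtual threads: 1.0E7 req/s
    }
}
```

So under these assumed numbers, the relaxed thread bound raises the throughput ceiling by three orders of magnitude at the same latency.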
>>
>


More information about the loom-dev mailing list