[External] : Re: jstack, profilers and other tools

Daniel Avery danielaveryj at gmail.com
Fri Jul 29 00:42:53 UTC 2022


whoops, that should have been

T' <= ~1000000
T <= max(T')
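
The relation T <= max(T') is just the fact that a time-average can never exceed its peak. A minimal sketch of that step, using a made-up sampled trace of in-flight requests (illustrative values, not measurements from any real system):

```java
// Sketch: the average of the instantaneous concurrency T' never exceeds
// its maximum, so any bound on max(T') also bounds the average T.
public class AvgVsPeak {
    static int max(int[] xs) {
        int m = xs[0];
        for (int x : xs) m = Math.max(m, x);
        return m;
    }

    static double avg(int[] xs) {
        double sum = 0;
        for (int x : xs) sum += x;
        return sum / xs.length;
    }

    public static void main(String[] args) {
        // T' sampled at equal intervals (hypothetical trace)
        int[] inFlight = {3, 7, 12, 9, 4};
        System.out.printf("T = %.1f <= max(T') = %d%n",
                avg(inFlight), max(inFlight));
        // prints: T = 7.0 <= max(T') = 12
    }
}
```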

On Thu, Jul 28, 2022 at 6:10 PM Daniel Avery <danielaveryj at gmail.com> wrote:

> > Or, putting it in yet another way, you need support not for the average
> number of requests in the system, but for the maximum number of requests in
> the system.
>
> That's a fair point, but I think the result (T = λW <= ~1000000) still
> holds. If we broke this step
>
> T <= ~1000000
>
> into multiple steps
>
> T' <= ~1000000
> T <= T'
>
> Where
> - T' is the instantaneous number of threads (or, requests)
> - T is the average number of threads
>
> Then the other steps still hold, but we've clarified that our system can
> never exceed ~1000000 instantaneous concurrent requests (or else it will
> run out of memory / crash / become "unstable" in a way that makes Little's
> Law inapplicable)
>
> -Daniel
>
> On Thu, Jul 28, 2022 at 5:22 PM Alex Otenko <oleksandr.otenko at gmail.com>
> wrote:
>
>> Or, putting it in yet another way, you need support not for the average
>> number of requests in the system, but for the maximum number of requests in
>> the system.
>>
>> In a system with a finite thread count this translates into support of
>> arbitrarily long request queues (which don't depend on the request rate,
>> either).
>> In a thread-per-request system it necessarily is arbitrarily large number
>> of threads. (Of course, this is only a model of some ideal system)
>>
>> Thank you for bearing with me.
>>
>> Alex
>>
>> On Fri, 29 Jul 2022, 00:11 Alex Otenko, <oleksandr.otenko at gmail.com>
>> wrote:
>>
>>> Thanks.
>>>
>>> That works under the _assumption_ that the time stays constant, which is
>>> also a statement made in the JEP.
>>>
>>> But we know that as the thread count grows, the time goes down. So one
>>> needs more reasoning to explain why T goes up while at the same time
>>> keeping the assumption that the time doesn't go down.
>>>
>>> In addition, it makes sense to talk of thread counts when the count
>>> presents a limit of some sort - e.g. if N threads are busy and one more
>>> request arrives, the request waits for a thread to become available - we
>>> have a system with N threads. Thread-per-request does not have that limit:
>>> for any number of threads already busy, if one more request arrives, it
>>> still gets a thread.
>>>
>>> My study concludes that thread-per-request is the case of an infinite
>>> number of threads (from a mathematical point of view). In this case,
>>> talking about T and its dependency on the request rate is meaningless.
>>>
>>> A Poisson distribution of requests means that for any request rate there
>>> is a non-zero probability of any number of requests in the system - and
>>> hence, of any number of threads. Connecting this number to a rate is
>>> meaningless.
>>>
>>> Basically, you need to support "a high number of threads" for any
>>> request rate, not just a very high request rate.
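
The Poisson point above can be checked numerically: for a Poisson-distributed number of in-flight requests, the tail probability P(X > n) is nonzero for every n, however large. A small sketch with an assumed mean concurrency of 10 (an illustrative figure, not one taken from the thread):

```java
// Sketch: for Poisson-distributed in-flight requests with mean L = lambda * W,
// the probability of exceeding any fixed concurrency n is nonzero.
public class PoissonTail {
    // P(X > n) for X ~ Poisson(mean), via iterative CDF accumulation
    static double tailAbove(double mean, int n) {
        double p = Math.exp(-mean); // P(X = 0)
        double cdf = p;
        for (int k = 1; k <= n; k++) {
            p = p * mean / k;       // P(X = k) from P(X = k - 1)
            cdf += p;
        }
        return 1.0 - cdf;
    }

    public static void main(String[] args) {
        double mean = 10.0; // assumed average concurrency
        for (int n : new int[] {10, 20, 30}) {
            System.out.printf("P(X > %d) = %.3e%n", n, tailAbove(mean, n));
        }
        // The tail shrinks rapidly but never reaches zero.
    }
}
```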
>>>
>>> On Thu, 28 Jul 2022, 23:35 Daniel Avery, <danielaveryj at gmail.com> wrote:
>>>
>>>> Maybe this is not helpful, but it is how I understood the JEP
>>>>
>>>>
>>>> This is Little’s Law:
>>>>
>>>>
>>>> L = λW
>>>>
>>>>
>>>> Where
>>>>
>>>> - L is the average number of requests being processed by a stationary
>>>> system (aka concurrency)
>>>>
>>>> - λ is the average arrival rate of requests (aka throughput)
>>>>
>>>> - W is the average time to process a request (aka latency)
>>>>
>>>>
>>>> This is a thread-per-request system:
>>>>
>>>>
>>>> T = L
>>>>
>>>>
>>>> Where
>>>>
>>>> - T is the average number of threads
>>>>
>>>> - L is the average number of requests (same L as in Little’s Law)
>>>>
>>>>
>>>> Therefore,
>>>>
>>>>
>>>> T = L = λW
>>>>
>>>> T = λW
>>>>
>>>>
>>>> Prior to Loom, the memory footprint of (platform) threads gives a bound
>>>> on thread count:
>>>>
>>>>
>>>> T <= ~1000
>>>>
>>>>
>>>> After Loom, the reduced memory footprint of (virtual) threads gives a
>>>> relaxed bound on thread count:
>>>>
>>>>
>>>> T <= ~1000000
>>>>
>>>>
>>>> Relating thread count to Little’s Law tells us that virtual threads can
>>>> support a higher average arrival rate of requests (throughput), or a higher
>>>> average time to process a request (latency), than platform threads could:
>>>>
>>>>
>>>> T = λW <= ~1000000
>>>>
>>>
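
Rearranging the quoted derivation as lambda <= T_max / W turns the bound on thread count into a bound on throughput at a given latency. A sketch of that arithmetic, assuming for illustration an average latency W of 100 ms (a made-up figure, not one stated in the thread or the JEP):

```java
// Sketch: by Little's Law T = lambda * W, a cap on threads T caps
// throughput lambda at lambda <= T_max / W for a fixed latency W.
public class LittlesLawBound {
    static double maxThroughput(double maxThreads, double latencySeconds) {
        return maxThreads / latencySeconds; // lambda = T / W
    }

    public static void main(String[] args) {
        double w = 0.1; // 100 ms average latency (assumed)
        System.out.printf("platform threads (~1000):    lambda <= %.0f req/s%n",
                maxThroughput(1_000, w));
        System.out.printf("virtual threads (~1000000):  lambda <= %.0f req/s%n",
                maxThroughput(1_000_000, w));
        // prints: platform threads (~1000):    lambda <= 10000 req/s
        //         virtual threads (~1000000):  lambda <= 10000000 req/s
    }
}
```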


More information about the loom-dev mailing list