: Re: Project Loom VirtualThreads hang

Ron Pressler ron.pressler at oracle.com
Fri Jan 6 11:33:07 UTC 2023


You are making the assumption that any kind of clever scheduling would be able to substantially improve server workloads at 100% CPU, but this hypothesis is yet to be demonstrated. To consider better scheduling algorithms, we first need to see real-world cases where the current scheduling causes a problem that requires a solution. We can’t solve a problem before we know what it is.

What we’re looking for is something along the lines of: “I have a server based on virtual threads, and I’ve run into the following scheduling-related problem: …” I expect that such reports will take time to show up, as they require more widespread use of virtual threads in production (or even in integration testing).

— Ron

On 6 Jan 2023, at 10:36, Arnaud Masson <arnaud.masson at fr.ibm.com> wrote:

I think what could be useful is to automatically add a scheduling “weight” to vthreads in the context of structured concurrency.
(A bit like CPU request/limit at pod level in Kubernetes.)

For example, let’s say I have 2 vthreads on a server processing concurrent HTTP requests (mostly CPU-bound).
If one of them forks 2 sub tasks with a structured concurrency scope, I would like the total weight of each of the 2 requests to remain constant:

Initially:


  *   request 1: weight 1.0
  *   request 2: weight 1.0


after fork:


  *   request 1 (blocked waiting for sub tasks)
     *   sub task 1: weight 0.5
     *   sub task 2: weight 0.5
  *   request 2: weight 1.0



If 3 cores are available, the 3 active tasks can each run at 100% (and request 1 will complete faster).
If only 2 cores are available, request 2 would run at 100% on one core, and the 2 sub tasks of request 1 would each get 50% of the other core, so request 1’s forking won’t have a negative impact on request 2.

In other words, that would allow optimizing for multicore (inside a request handler) without breaking scheduling fairness.
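
For concreteness, a minimal sketch of the fork I mean, assuming the incubating jdk.incubator.concurrent.StructuredTaskScope API; the methods partOne/partTwo are just placeholder CPU-bound work, and nothing here expresses the weights, it only shows where the fork and the blocking join happen:

    import jdk.incubator.concurrent.StructuredTaskScope;
    import java.util.concurrent.Future;

    class WeightedHandler {
        // Handles one request by forking two CPU-bound sub tasks in a scope.
        String handleRequest() throws Exception {
            try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
                Future<String> left  = scope.fork(this::partOne);   // sub task 1 (ideally weight 0.5)
                Future<String> right = scope.fork(this::partTwo);   // sub task 2 (ideally weight 0.5)
                scope.join();           // the request vthread blocks here, waiting for its sub tasks
                scope.throwIfFailed();
                return left.resultNow() + right.resultNow();
            }
        }

        // Placeholder CPU-bound work.
        String partOne() { return "a"; }
        String partTwo() { return "b"; }
    }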

Thanks
Arnaud


> On 5 Jan 2023, at 21:24, Robert Engels <rengels at ix.netcom.com> wrote:
>
> I think it would be better to be able to park/unpark on a monitor efficiently. I think the overhead in handling that common case is more significant than expected.

We know; it’s work-in-progress, but will take some time.
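
In the meantime, the common workaround when blocking while holding a monitor is to guard that section with a java.util.concurrent lock instead of synchronized, so a parked virtual thread releases its carrier rather than pinning it. A minimal sketch (the sleep is only a stand-in for blocking work):

    import java.util.concurrent.locks.ReentrantLock;

    class SharedResource {
        private final ReentrantLock lock = new ReentrantLock();

        void refresh() throws InterruptedException {
            lock.lock();            // a virtual thread blocked here can unmount from its carrier
            try {
                Thread.sleep(100);  // stand-in for blocking work done while holding the lock
            } finally {
                lock.unlock();
            }
        }
    }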

>
> I like the idea of carrying over the Thread.priority in vthreads to allow finer control over the scheduling.

Scheduling with priorities can be done either well or fast, but not both. Non-realtime kernels schedule relatively fast, but their priority implementation isn’t that good; realtime kernels do priorities well, but their scheduling is very slow (realtime always trades speed for predictability). Respecting priorities becomes even harder when there are lots and lots of threads, and problems such as priority inversion certainly don’t get easier. For the common case where you have a small set of low-priority threads doing work in the background, the easiest solution is just to use platform threads for those.
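
A minimal sketch of that last point (the runnable is only a placeholder task):

    // Low-priority background work on a dedicated platform thread; virtual
    // threads have no effective priority, so a platform thread is used instead.
    Thread worker = Thread.ofPlatform()
            .name("background-maintenance")
            .daemon(true)
            .priority(Thread.MIN_PRIORITY)
            .start(() -> System.out.println("background work"));  // placeholder task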

As in the case of time-sharing, the need for priorities will have to be demonstrated in real projects in the field to justify addressing it.

— Ron
