: Re: Project Loom VirtualThreads hang

Arnaud Masson arnaud.masson at fr.ibm.com
Fri Jan 6 12:04:25 UTC 2023


Well, weights for CPU shares are what Kubernetes uses at the Pod level, for example.

As discussed, if we don’t have such a mechanism, a single request that is internally parallelized can carry too much weight.
I’m not saying it’s a super urgent feature; it will just remain as problematic as before 😊

I ran into this problem myself in production some time ago, pre-Loom, with classic threads.
The consequence is that people tend to be very conservative and underutilize cores.
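To illustrate the point about a parallelized request, here is a minimal sketch (my own hypothetical example, not code from this thread): one incoming request fans out into N virtual threads of CPU-bound work. Under the default scheduler there is no per-request weighting, so this single wide request competes for carrier threads on the same footing as N independent single-threaded requests.

```java
import java.util.concurrent.*;
import java.util.stream.*;

public class FanOutWeight {
    // Hypothetical handler: one request fans out into `fanOut` virtual
    // threads, each doing CPU-bound busy work. Nothing ties the subtasks
    // back to a per-request CPU share, so a wide request has the same
    // scheduling footprint as `fanOut` separate requests.
    static long handleRequest(int fanOut) throws Exception {
        try (var pool = Executors.newVirtualThreadPerTaskExecutor()) {
            var tasks = IntStream.range(0, fanOut)
                .mapToObj(i -> (Callable<Long>) () -> {
                    long sum = 0;                       // CPU-bound busy loop
                    for (int j = 0; j < 1_000_000; j++) sum += j;
                    return sum;
                })
                .collect(Collectors.toList());
            long total = 0;
            for (var f : pool.invokeAll(tasks)) total += f.get();
            return total;
        }
    }

    public static void main(String[] args) throws Exception {
        // One request with fan-out 100 schedules 100 virtual threads at once.
        System.out.println(handleRequest(100));
    }
}
```

(Requires JDK 21+ for virtual threads.) The conservative workaround mentioned above amounts to capping `fanOut` well below what the hardware could absorb, which is exactly the core underutilization described.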

Thanks
Arnaud


You are making the assumption that any kind of clever scheduling would be able to substantially improve server workloads at 100% CPU, but this hypothesis is yet to be demonstrated. To consider better scheduling algorithms, we first need to see real-world cases where the current scheduling causes a problem that requires a solution. We can’t solve a problem before we know what it is.

What we’re looking for is something along the lines of: “I have a server based on virtual threads, and I’ve run into the following scheduling-related problem: …” I expect that such reports will take time to show up, as they require more widespread use of virtual threads in production (or even in integration testing).


Unless otherwise stated above:

Compagnie IBM France
Siège Social : 17, avenue de l'Europe, 92275 Bois-Colombes Cedex
RCS Nanterre 552 118 465
Forme Sociale : S.A.S.
Capital Social : 664 069 390,60 €
SIRET : 552 118 465 03644 - Code NAF 6203Z

