Re: Project Loom VirtualThreads hang
Arnaud Masson
arnaud.masson at fr.ibm.com
Fri Jan 6 19:45:43 UTC 2023
I can’t see how a stop-the-world effect can be avoided once all your carriers are busy with non-switchable CPU-bound tasks. Maybe I’m missing something 😊
Not very different from other pinning problems (JNI...), except for the argument that 100% CPU usage should never occur and is therefore not a problem.
I will try to write a test to simulate this and post the results here.
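Roughly something like the sketch below (the class name and the carrier count of 2 are just placeholders, and it assumes running with the JDK-internal property -Djdk.virtualThreadScheduler.parallelism=2 so the effect is easy to see):

    import java.time.Duration;
    import java.time.Instant;

    public class CarrierStarvationTest {
        public static void main(String[] args) throws Exception {
            int carriers = 2; // must match the parallelism set on the command line

            // Occupy every carrier with a non-yielding CPU-bound loop for ~10 seconds.
            for (int i = 0; i < carriers; i++) {
                Thread.startVirtualThread(() -> {
                    long deadline = System.nanoTime() + Duration.ofSeconds(10).toNanos();
                    long x = 0;
                    while (System.nanoTime() < deadline) {
                        x += x * 31 + 1; // busy work: never blocks, never yields
                    }
                });
            }

            // Meanwhile, a "victim" virtual thread that only needs a moment of CPU.
            Instant start = Instant.now();
            Thread victim = Thread.startVirtualThread(() ->
                    System.out.println("victim ran after " +
                            Duration.between(start, Instant.now()).toMillis() + " ms"));
            victim.join();
        }
    }

If the victim only prints after roughly 10 seconds, that would be the “stop-the-world” effect I mean.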
Thanks
Arnaud
I don’t think that increasing the scheduler’s parallelism would help, nor do I think you’d see a “stop-the-world”, but again, these hypotheses are just not actionable. There’s nothing we can do to address them. When you find a problem, please report it and we’ll investigate what can be done.
— Ron
On 6 Jan 2023, at 19:11, Arnaud Masson <arnaud.masson at fr.ibm.com> wrote:
I don’t think having 100% CPU usage on a pod is enough to justify a “stop-the-world” effect on Loom scheduling for the other tasks.
Also, 100% is the extreme case; there can be 75% CPU usage, meaning only one carrier is left for all the other tasks in my example.
Again, not a blocker I guess; you just have to increase the carrier count to mitigate it, but that is good old native-thread sizing again, where it should not really be needed.
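(To be concrete, by “increase the carrier count” I mean something like the JDK-internal scheduler properties below; as far as I know they are undocumented and may change, and myapp.jar is just a placeholder:)

    java -Djdk.virtualThreadScheduler.parallelism=16 \
         -Djdk.virtualThreadScheduler.maxPoolSize=16 \
         -jar myapp.jar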
“Time-sharing would make those expensive tasks complete in a lot more than 10 seconds”:
I understand there would be switching overhead (so it’s slower), but I don’t understand why it would be much slower when there are only a few such tasks, as in my example.
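(For completeness: the only way I can see today to make such a CPU-bound task “switchable” is to yield explicitly, roughly as in the sketch below; the explicit yields are exactly the switching overhead I mean. Thread.yield() on a virtual thread should let the scheduler run other virtual threads on the same carrier, though the exact behaviour may differ between JDK builds.)

    public class CooperativeCpuTask {
        public static void main(String[] args) throws Exception {
            Thread vt = Thread.startVirtualThread(() -> {
                long x = 0;
                for (long i = 0; i < 1_000_000_000L; i++) {
                    x += i;
                    if ((i & 0xFFFFF) == 0) { // roughly every million iterations
                        Thread.yield();       // give other virtual threads a turn
                    }
                }
                System.out.println(x);
            });
            vt.join();
        }
    }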
thanks
Arnaud