Re: Project Loom VirtualThreads hang
Robert Engels
rengels at ix.netcom.com
Fri Jan 6 19:39:29 UTC 2023
I could go into a long description of the 64-core machines we used for HFT, the native thread prioritization we added, the multiple specialized lock-free queues, and the core-affinity and NUMA-aware scheduling, all of it done to reduce latency and increase throughput.
I think a lot of those techniques still apply to vthreads, and having them in the platform would benefit everyone.
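For what it's worth, here is a minimal sketch (not from this thread, just an illustration) of the one knob that does exist today: the default scheduler's carrier parallelism can be tuned with the documented jdk.virtualThreadScheduler.parallelism and jdk.virtualThreadScheduler.maxPoolSize system properties. The class name and workload below are illustrative assumptions.

import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class CarrierTuningSketch {
    public static void main(String[] args) {
        // The default virtual-thread scheduler sizes its carrier pool to the
        // number of available processors; it can be overridden at launch, e.g.:
        //   java -Djdk.virtualThreadScheduler.parallelism=16 \
        //        -Djdk.virtualThreadScheduler.maxPoolSize=32 CarrierTuningSketch
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i -> exec.submit(() -> {
                // Hypothetical task mix: blocking work like this sleep releases
                // its carrier, while a long CPU-bound loop would hold one until
                // it yields or blocks.
                Thread.sleep(Duration.ofMillis(10));
                return i;
            }));
        } // close() waits for the submitted tasks to complete
    }
}

Carrier-thread priority, affinity and NUMA placement have no equivalent switches today, as far as I know.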
> On Jan 6, 2023, at 1:27 PM, Ron Pressler <ron.pressler at oracle.com> wrote:
>
> I don’t think that increasing the scheduler’s parallelism would help, nor do I think you’d see a “stop-the-world”, but again, these hypotheses are just not actionable. There’s nothing we can do to address them. When you find a problem, please report it and we’ll investigate what can be done.
>
> — Ron
>
>> On 6 Jan 2023, at 19:11, Arnaud Masson <arnaud.masson at fr.ibm.com> wrote:
>>
>> I don’t think having 100% CPU usage on a pod is enough to justify a “stop-the-world” effect on Loom scheduling for the other tasks.
>> Also 100% is the extreme case, but there can be 75% CPU usage, meaning only 1 carrier left for all other tasks in my example.
>>
>> Again, not a blocker I guess; we just have to increase the carrier count to mitigate, but that’s good old native thread sizing again, where it should not really be needed.
>>
>> “Time-sharing would make those expensive tasks complete in a lot more than 10 seconds”:
>> I understand there would be switching overhead (so it’s slower), but I don’t understand why it would be much slower if there are only a few of them, as in my example.
>>
>> thanks
>> Arnaud
>>
>
>