Cache topology aware scheduling

Danny Thomas dannyt at netflix.com
Sat Sep 7 13:23:05 UTC 2024


Yup, that’s it exactly. I’m planning on trying cluster affinity based on
the current CPU of the submitting/unparking platform thread, and was going
to explore balancing while I was at it.
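
For the affinity part, a minimal sketch of looking up the submitter's CPU
(Linux-only, assuming JDK 22's FFM API; CurrentCpu and the cpuToCluster
mapping are placeholders of mine, with the mapping in practice derived from
the machine's cache topology, not anything from the linked repository):

    import java.lang.foreign.FunctionDescriptor;
    import java.lang.foreign.Linker;
    import java.lang.invoke.MethodHandle;
    import static java.lang.foreign.ValueLayout.JAVA_INT;

    class CurrentCpu {
        // Bind sched_getcpu(3) from libc via the Foreign Function & Memory API.
        private static final MethodHandle SCHED_GETCPU = Linker.nativeLinker()
                .downcallHandle(
                        Linker.nativeLinker().defaultLookup()
                                .find("sched_getcpu").orElseThrow(),
                        FunctionDescriptor.of(JAVA_INT));

        // CPU -> cluster index; placeholder values, in practice read from
        // /sys/devices/system/cpu/cpu*/cache on Linux.
        private static final int[] cpuToCluster = {0, 0, 0, 0, 1, 1, 1, 1};

        static int currentCluster() {
            try {
                return cpuToCluster[(int) SCHED_GETCPU.invokeExact()];
            } catch (Throwable t) {
                throw new AssertionError(t);
            }
        }
    }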

On Sat, 7 Sep 2024 at 11:15 PM, Alan Bateman <alan.bateman at oracle.com>
wrote:

> On 06/09/2024 08:10, Danny Thomas wrote:
>
> Thanks Franz, Alan,
>
> I spun up a quick experiment with a custom scheduler here:
>
> https://github.com/DanielThomas/virtual-threads-cluster-aware
>
>
> Would I be correct to say that this experiment is an FJP (in async/FIFO
> mode) for each "cluster", with the worker threads bound to the processors
> in that cluster? A "front-end" scheduler forwards tasks to one of the FJP
> instances. If a platform thread starts or unparks a virtual thread then the
> target's task will be submitted to a random FJP instance. If a virtual
> thread starts or unparks another virtual thread then it will be submitted
> to the "current" FJP. Assuming I have this right, I would expect it to
> work well for workloads where there are platform threads in the picture,
> as that will have the effect of balancing the load across the FJP
> instances. In other cases I assume it could be a bit unbalanced, at least
> without something that nudges virtual threads to other clusters.
>
>
> -Alan
>
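
A minimal sketch of the routing Alan describes above (the names and pool
sizes are placeholders, worker-to-CPU pinning is omitted, and it assumes
the scheduler is invoked in carrier-thread context so that
ForkJoinTask.getPool() identifies the submitting cluster's pool):

    import java.util.concurrent.Executor;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.ForkJoinTask;
    import java.util.concurrent.ThreadLocalRandom;

    class ClusterAwareScheduler implements Executor {
        // One FJP per cluster, in async (FIFO) mode; binding each pool's
        // workers to its cluster's processors is not shown here.
        private final ForkJoinPool[] pools;

        ClusterAwareScheduler(int clusters, int threadsPerCluster) {
            pools = new ForkJoinPool[clusters];
            for (int i = 0; i < clusters; i++) {
                pools[i] = new ForkJoinPool(threadsPerCluster,
                        ForkJoinPool.defaultForkJoinWorkerThreadFactory,
                        null, /* asyncMode (FIFO) */ true);
            }
        }

        @Override
        public void execute(Runnable task) {
            ForkJoinPool current = ForkJoinTask.getPool();
            for (ForkJoinPool pool : pools) {
                if (pool == current) {
                    // Submitted from one of our workers (a virtual thread
                    // starting/unparking another): keep it on this cluster.
                    pool.execute(task);
                    return;
                }
            }
            // Submitted from a platform thread: pick a cluster at random,
            // which spreads the load across the pools.
            pools[ThreadLocalRandom.current().nextInt(pools.length)]
                    .execute(task);
        }
    }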