Performance of pooling virtual threads vs. semaphores
Attila Kelemen
attila.kelemen85 at gmail.com
Wed May 29 23:03:58 UTC 2024
Yeah, just realized that and sent my email pretty much literally a second
after your email :)
Anyway, while in theory 1M threads are contending for the semaphore, I
don't think that should be a problem, because the contention is largely
theoretical: those "contending" VTs are just parked in the semaphore's
wait queue, and each release should wake only one of them. Also, I think
Liam's comparison is fair, because neither of the other two methods pushes
back, so pushing back only in the VT version would be very unfair.
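
For reference, the semaphore-bounded variant I have in mind looks roughly
like this (just a minimal sketch, not Liam's actual benchmark; the 1M task
count, the permit count of 600, and the sleep standing in for real work
are my assumptions):

    import java.time.Duration;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    public class SemaphoreBoundedVts {
        public static void main(String[] args) {
            int tasks = 1_000_000;                  // assumed task count
            Semaphore permits = new Semaphore(600); // assumed limit, mirrors the 600-thread pool

            try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
                for (int i = 0; i < tasks; i++) {
                    exec.submit(() -> {
                        // All 1M VTs are created up front, but only 600 run the
                        // "work" at a time; the rest park in the semaphore's
                        // wait queue until a permit is released.
                        permits.acquire();
                        try {
                            Thread.sleep(Duration.ofMillis(10)); // stand-in for real work
                        } finally {
                            permits.release();
                        }
                        return null;
                    });
                }
            } // close() waits for all submitted tasks
        }
    }

Each release unparks a single waiting VT, so the scheduler should never see
more than roughly 600 runnable threads at once, which is why I'd expect the
contention to stay cheap.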
robert engels <rengels at ix.netcom.com> wrote (on Thu, 30 May 2024,
0:58):
> I remember that too, but in this case I don’t think it is the cause.
>
> In the bounded/pooled thread scenario - you are only scheduling 600
> threads (either platform or virtual).
>
> In “scenario #2” all 1M virtual threads are created and are contending on
> a semaphore. This contention on a single resource does not occur in the
> other scenarios - this will lead to thrashing of the scheduler.
>
> I suspect if it is run under a profiler it will be obvious. With 128
> carrier threads, you have increased the contention over a typical machine
> by an order of magnitude.
>
>
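
For contrast, the bounded/pooled scenario you describe would be roughly the
following (again only a sketch, under the same assumed workload); here the
executor's work queue does the limiting and there is no shared semaphore at
all:

    import java.time.Duration;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class FixedPoolOf600 {
        public static void main(String[] args) {
            int tasks = 1_000_000; // assumed task count
            // Only 600 threads ever exist; the remaining tasks wait in the
            // executor's work queue instead of parking on a semaphore.
            try (ExecutorService exec = Executors.newFixedThreadPool(600)) {
                for (int i = 0; i < tasks; i++) {
                    exec.submit(() -> {
                        Thread.sleep(Duration.ofMillis(10)); // stand-in for real work
                        return null;
                    });
                }
            } // close() waits for queued tasks to complete
        }
    }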