Wrapping virtual threads in predefined ExecutorService.

Andrii Lomakin lomakin.andrey at gmail.com
Sat Jun 17 12:00:05 UTC 2023


Attila,

While I understand and respect your opinion, I must disagree. Practical
experience has shown that a thread-per-core architecture combined with
asynchronous IO is highly beneficial in database development. These
techniques are used in ScyllaDB and its Seastar framework (written in C++),
https://github.com/scylladb/seastar, and in a project I took part in myself:
https://www.datadoghq.com/blog/engineering/introducing-glommio/,
https://github.com/DataDog/glommio (written in Rust). The results have been
quite noticeable in practice.
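
To make the approach concrete, here is a rough Java sketch of what I mean
by thread-per-core (my own illustration for this thread, not code from
Seastar or Glommio): each core gets its own single-threaded executor backed
by a dedicated platform thread, and all work for a given key is routed to
the same shard, so per-shard state can be mutated without locks. Actual
pinning of the backing thread to a physical core is not expressible in pure
Java and would need a native call, which I leave out.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Rough sketch only: one single-threaded executor per core, each backed by
// a dedicated platform thread. Real CPU affinity would need a native call.
public final class PerCoreExecutors {
    private final ExecutorService[] shards;

    public PerCoreExecutors() {
        int cores = Runtime.getRuntime().availableProcessors();
        shards = new ExecutorService[cores];
        for (int i = 0; i < cores; i++) {
            int shard = i;
            shards[i] = Executors.newSingleThreadExecutor(
                    r -> new Thread(r, "shard-" + shard));
        }
    }

    // All work for a given key goes to the same shard, so per-shard state
    // can be mutated without locks, which is the core idea of thread-per-core.
    public void submit(Object key, Runnable task) {
        shards[Math.floorMod(key.hashCode(), shards.length)].execute(task);
    }

    public void shutdown() {
        for (ExecutorService shard : shards) {
            shard.shutdown();
        }
    }
}

In a database engine each shard would then own a slice of the data, and
cross-shard work is done by passing messages between shards instead of
taking shared locks.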

On Sat, Jun 17, 2023 at 1:12 PM Attila Kelemen <attila.kelemen85 at gmail.com>
wrote:

> I'm not sure where you want to improve performance. Suppose we could
> create multiple virtual thread groups (each with an arbitrary number of
> carrier threads specified at group creation time). Assuming you have N
> cores, you could create N such groups with a single carrier thread each
> (then you somehow pin that carrier thread to a given core). Now you could
> execute virtual threads on a given core, but then you seem to lose most of
> the advantages, because virtual threads (even if they are running on the
> same core) are concurrent, so you can't really avoid synchronization at
> this point. Not to mention that you will get the overhead of virtual thread
> scheduling. I think if you really want some very low level optimization,
> then you probably have to use platform threads instead of virtual threads,
> and then write your tightly optimized code there.
>
> Andrii Lomakin <lomakin.andrey at gmail.com> wrote (on Sat, 17 Jun 2023,
> 11:29):
>
>> Yes, that is what I intended to say.
>> I apologize for any confusion. Is there a plan to incorporate this
>> feature in the future?
>>
>>
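
Regarding the point above that virtual threads remain concurrent even on a
single carrier: that is true, and a small self-contained demo illustrates it
(my own sketch; it assumes the jdk.virtualThreadScheduler.parallelism
property described in JEP 425/444 to limit the default scheduler to one
carrier thread). The two virtual threads switch at the sleep() calls, so the
plain counter loses updates while the AtomicLong stays correct; this is
exactly why a sharded design tries to keep each piece of state on one shard.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

// Run with: java -Djdk.virtualThreadScheduler.parallelism=1 InterleavingDemo
// (property per JEP 425/444; restricts the default scheduler to one carrier
// thread). Even then, the two virtual threads interleave at blocking points,
// so the plain counter can lose updates while the AtomicLong cannot.
public class InterleavingDemo {
    static long plain = 0;
    static final AtomicLong atomic = new AtomicLong();

    public static void main(String[] args) {
        try (ExecutorService vts = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int t = 0; t < 2; t++) {
                vts.execute(() -> {
                    for (int i = 0; i < 50_000; i++) {
                        long seen = plain;            // read
                        atomic.incrementAndGet();
                        if (i % 5_000 == 0) {
                            try {
                                Thread.sleep(1);      // yields the carrier
                            } catch (InterruptedException e) {
                                return;
                            }
                        }
                        plain = seen + 1;             // stale write-back possible
                    }
                });
            }
        } // ExecutorService.close() waits for both tasks (JDK 19+)
        System.out.println("plain=" + plain + ", atomic=" + atomic.get());
    }
}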

-- 
Best regards,
Andrii Lomakin.