[loom-dev] Re: Project Loom VirtualThreads hang

Pedro Lamarão pedro.lamarao at prodist.com.br
Fri Jan 6 19:11:29 UTC 2023


On Fri, Jan 6, 2023 at 13:39, Robert Engels <rengels at ix.netcom.com>
wrote:

> I have to agree a bit with Arnaud. I don’t like the idea that I have to
> “reason about” these issues. Imagine a server service that encodes video.
> Highly CPU bound. But it also needs to emit heartbeats to keep clients
> alive. I have to know to “put heartbeat requests on their own native
> thread” because too many client encoding requests will cause them not to
> be sent. Or even more simply: a very short encoding request takes a very
> long time because it is blocked by other long-running requests that got
> there first. FIFO doesn’t lead to the best user experience at times.
>
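In Loom terms, the separation Robert describes would amount to something
like the sketch below. It is only an illustration of that workaround, not
code from this thread; EncodingServer, encode() and sendHeartbeat() are
hypothetical names. Heartbeats get a dedicated platform thread, so
CPU-bound virtual threads cannot delay them:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class EncodingServer {

    // One platform thread reserved for heartbeats, so encoding load
    // cannot starve them.
    private final ScheduledExecutorService heartbeats =
            Executors.newSingleThreadScheduledExecutor(Thread.ofPlatform().factory());

    // CPU-bound encoding requests each get a virtual thread.
    private final ExecutorService encoders =
            Executors.newVirtualThreadPerTaskExecutor();

    void start() {
        heartbeats.scheduleAtFixedRate(this::sendHeartbeat, 0, 1, TimeUnit.SECONDS);
    }

    void submit(byte[] frame) {
        encoders.submit(() -> encode(frame));   // long-running, CPU-bound
    }

    private void sendHeartbeat() { /* hypothetical: notify clients we are alive */ }

    private byte[] encode(byte[] frame) { /* hypothetical: CPU-bound encoding */ return frame; }
}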

This is the kind of service I provide in practice: not video encoding, but
data encryption and decryption. Any single request amounts to getting data,
pushing it through some transformation, then forwarding the result.
"Sharing" happens because I/O happens.

No healthy service instance will exhaust processing capacity, for the
simple reason that processing capacity *must* be provisioned to ensure this
never happens. If it did happen, as a consequence of underprovisioned
capacity, request latency would increase unpredictably and we would miss
our quality targets. A processing unit in a data pipeline exhausting its
processing capacity is unacceptable in general. Time sharing merely lets
the system keep operating minimally instead of becoming completely
unresponsive; it cannot solve the actual problem. Processing is not special
in this respect: if you read from disk and exhaust your "I/O slices", the
problem is the same.

-- 
Pedro Lamarão