Loom and reading/writing of data

Kasper Nielsen kasperni at gmail.com
Thu Nov 21 12:09:32 UTC 2019


> TBD at least for reading where overloads that lazily supply byte[] or
> buffer have been suggested to reduce memory usage when there are tens of
> thousands of virtual threads blocked reading from sockets. Writing is
> different as the bytes to send on the network will usually be
> accumulated in a byte[] or buffer before calling the API to write.

Wouldn't it be possible to do something similar to what Panama is doing with
the memory access API?

For example, for network IO you rarely need the random-access features that
ByteBuffer provides; sequential access is almost always enough. If we drop
random-access support, we only need to know about the next element to write
and/or the next element to read.
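To illustrate the claim that sequential access suffices, here is a small sketch (class and method names are illustrative, not part of any proposal) parsing a typical length-prefixed network frame with nothing but sequential reads, no ByteBuffer positioning:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Illustrative only: a length-prefixed frame needs no random access,
// just two sequential reads.
public class SequentialFrame {

    // Reads one frame: a 2-byte big-endian length followed by that many bytes.
    static byte[] readFrame(DataInputStream in) throws IOException {
        int len = in.readUnsignedShort(); // sequential read #1: the length
        byte[] payload = new byte[len];
        in.readFully(payload);            // sequential read #2: the payload
        return payload;
    }

    public static void main(String[] args) throws IOException {
        byte[] wire = { 0, 5, 'h', 'e', 'l', 'l', 'o' };
        var in = new DataInputStream(new ByteArrayInputStream(wire));
        System.out.println(new String(readFrame(in))); // prints "hello"
    }
}
```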

So suppose we define a new kind of VarHandle that supports blocking, and push
all buffer/memory handling down into the VM.

Users could define their network protocol data types using some built-in types:
VarHandle H1 = IOHandlers.intLE();
VarHandle H2 = IOHandlers.intUnsigned();
VarHandle H3 = IOHandlers.line(); //UTF-8
VarHandle H4 = IOHandlers.line(StandardCharsets.US_ASCII);

And then use these VarHandles to operate directly on the network channel:
int errorCode = (int) H2.get(channel);
String nextLine = (String) H3.get(channel);
H2.set(channel, 12345);
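The IOHandlers API above is hypothetical, so here is a rough sketch of the intended sequential semantics modeled over today's InputStream (all names here are mine, purely illustrative; the real proposal would push this behind VarHandle and into the VM):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the sequential read semantics of the proposed
// IOHandlers: each "handler" just consumes the next bytes off the stream.
public class IoHandlersSketch {

    // Roughly IOHandlers.intLE(): reads a little-endian 32-bit int.
    // (DataInputStream is big-endian, so the bytes are assembled by hand.)
    static int readIntLE(InputStream in) throws IOException {
        int b0 = in.read(), b1 = in.read(), b2 = in.read(), b3 = in.read();
        if ((b0 | b1 | b2 | b3) < 0) throw new IOException("unexpected EOF");
        return b0 | (b1 << 8) | (b2 << 16) | (b3 << 24);
    }

    // Roughly IOHandlers.line(): reads bytes up to '\n', decoded as UTF-8.
    static String readLine(InputStream in) throws IOException {
        var buf = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1 && b != '\n') buf.write(b);
        return buf.toString(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // 12345 little-endian, followed by the line "ok".
        byte[] wire = { 57, 48, 0, 0, 'o', 'k', '\n' };
        InputStream channel = new ByteArrayInputStream(wire);
        int errorCode = readIntLE(channel);
        String nextLine = readLine(channel);
        System.out.println(errorCode + " " + nextLine); // prints "12345 ok"
    }
}
```

In the proposal the blocking would happen inside the VM, parking only the virtual thread; the sketch just blocks on the stream.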

Besides not requiring millions of byte buffers managed by the user, not
exposing byte buffers or byte arrays directly to the user leaves the VM free
to use all kinds of tricks: not nulling out consumed data, ring buffers (with
implicit wrap-around), non-contiguous memory, only scheduling a read once
there is enough data available to satisfy the next VarHandle.get, blocking
writes if there is not enough buffer capacity, etc.
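To make a couple of those tricks concrete, here is a minimal sketch (all names hypothetical) of a ring buffer with implicit wrap-around whose writes fail when capacity is exhausted and whose reads only succeed once enough data is buffered:

```java
// Illustrative sketch: a power-of-two-sized ring buffer where wrap-around
// is implicit in the index masking, and an int read only completes when
// at least four bytes are buffered (a real VM would park the virtual
// thread instead of returning a sentinel).
public class RingBuffer {
    private final byte[] buf;
    private long head; // position of next byte to read
    private long tail; // position of next byte to write

    RingBuffer(int capacityPow2) { buf = new byte[capacityPow2]; } // must be a power of two

    int available() { return (int) (tail - head); }
    int freeSpace() { return buf.length - available(); }

    // Mirrors "block writes if there is not enough buffer capacity":
    // the caller would park until space frees up; here we just report it.
    boolean offer(byte b) {
        if (freeSpace() == 0) return false;
        buf[(int) (tail++ & (buf.length - 1))] = b; // implicit wrap-around
        return true;
    }

    // Mirrors "only scheduling reads if there is enough data available":
    // returns -1 as a sketch-only sentinel when fewer than 4 bytes are buffered.
    int pollInt() {
        if (available() < 4) return -1;
        int v = 0;
        for (int i = 0; i < 4; i++)
            v |= (buf[(int) (head++ & (buf.length - 1))] & 0xFF) << (8 * i);
        return v;
    }

    public static void main(String[] args) {
        RingBuffer rb = new RingBuffer(8);
        for (byte b : new byte[] { 57, 48, 0, 0 }) rb.offer(b); // 12345 little-endian
        System.out.println(rb.pollInt()); // prints 12345
    }
}
```

Because consumed bytes are simply left behind the head index, nothing is nulled out or copied, which is exactly the kind of freedom the VM gains by not handing the buffer to user code.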

Obviously this would be a really low-level API, one that you would want to
supplement with a more high-level API.

/Kasper


More information about the loom-dev mailing list