[foreign-memaccess] wrapping up JVMLS 2019

Maurizio Cimadamore maurizio.cimadamore at oracle.com
Fri Aug 2 23:54:00 UTC 2019


Hi,
I'm just heading back home after a great week at the JVMLS 2019, where I 
gave a talk about the foreign memory access API [1]. The talk was very 
well received and we had a follow-up breakout session in the context of 
the OpenJDK Committer Workshop, which was co-located with JVMLS.

I'd like to summarize the feedback I've received on the API (this is, of 
course, "my side of the elephant" - there may well be other sides to it :-)):

* The MemorySegment API in itself feels 'right'. It has the right level 
of abstraction, and it allows enough flexibility to peek/poke at memory 
using var handles (a short sketch follows this list), which will make it 
easier to implement things like off-heap tensors and Python-like 
ndarrays in Java.

* There seems to be agreement that there are at least two categories of 
users of this API. The first is casual users, who might just want to 
allocate a segment in a single shot and serialize some objects off-heap. 
For these users the single-threaded limitation of the API, as well as 
the requirement to always 'close' the segment via try-with-resources, 
is probably fine.

* The second category is high-performance frameworks, which of course 
care about a number of other things:

- allocation performance: they don't want to do one malloc per 
allocation request
- what to do about 'forgotten' segments - i.e. segments that become 
unreachable by the GC but have not been closed
- concurrency, e.g. accessing the same segment from multiple threads

As we were chatting about the various alternatives, and about some of 
the things described in the concurrency writeup I published a few weeks 
ago [2], it seemed to me that there's a logical split to be made here: 
we keep the current API for the simple use cases (after all, the API is 
fine for those), and then we build a higher-level 
allocator/scope/whatever which handles the remaining concerns. That is, 
if you are an advanced user you probably want to create your segments 
indirectly, through an allocator:

try (MemoryPool pool = MemoryPool.ofNative()) { // big slab of memory allocated here

      MemorySegment segment = pool.segment(layout);
      ... // segment can be shared

} // pool memory de-allocated here


This move allows us to put all the complexity in the allocator, which 
will be responsible for handing over chunks of a private segment to 
clients. Clients will be able to close pooled segments (thus returning 
them to the pool), and the pool can also use a Cleaner to make sure 
that GC-unreachable segments have a chance to be forcibly closed. At 
the same time, since the underlying native memory is guaranteed to be 
'available' for the entire life-cycle of the pool, all segments created 
within a pool can safely be accessed by multiple threads concurrently. 
Under the covers, we can keep a counter in the pool of how many 
segments are 'pending', and only allow a pool::close if that count is 
zero (this is similar to some of the conclusions reached in [2]).
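
To illustrate the bookkeeping, here's a purely hypothetical sketch 
(MemoryPool and PooledSegment are illustrative names, not part of any 
API) of how the pending-segment counting could work; a Cleaner could 
additionally be registered for each pooled segment to catch the 
'forgotten' ones:

import java.util.concurrent.atomic.AtomicInteger;

final class MemoryPool implements AutoCloseable {
    private final AtomicInteger pending = new AtomicInteger();
    private volatile boolean closed;

    PooledSegment segment(long byteSize) {
        // (a real implementation would make this check and the increment atomic)
        if (closed) throw new IllegalStateException("pool is closed");
        pending.incrementAndGet();       // one more segment handed out
        return new PooledSegment(this, byteSize);
    }

    void release(PooledSegment segment) {
        pending.decrementAndGet();       // segment returned to the pool
    }

    @Override
    public void close() {
        int count = pending.get();
        if (count != 0) {
            throw new IllegalStateException(count + " segments still pending");
        }
        closed = true;                   // the backing slab would be freed here
    }
}

final class PooledSegment implements AutoCloseable {
    private final MemoryPool owner;
    final long byteSize;                 // stand-in for the actual memory slice

    PooledSegment(MemoryPool owner, long byteSize) {
        this.owner = owner;
        this.byteSize = byteSize;
    }

    @Override
    public void close() {
        owner.release(this);             // hands the slice back to the pool
    }
}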

I think I like this a lot - not only does it build nicely _on top_ of 
the memory segment API, it also addresses problems that users of the 
memory segment API are likely to run into, and to implement ad-hoc 
solutions for, anyway.

I'll keep thinking about it, but so far it looks like a promising direction.

Cheers
Maurizio

[1] - https://www.youtube.com/watch?v=r4dNRVWYaZI
[2] - http://cr.openjdk.java.net/~mcimadamore/panama/confinement.html
