RFR: 8207851 JEP Draft: Support ByteBuffer mapped over non-volatile memory

Stuart Marks stuart.marks at oracle.com
Fri Sep 28 20:50:44 UTC 2018

On 9/28/18 12:21 AM, Peter Levart wrote:
> I mostly agree with your assessment about the suitability of the ByteBuffer API 
> for nice multithreaded use. What would such an API look like? I think pretty much 
> like ByteBuffer but without the things that mutate mark/position/limit/ByteOrder. A 
> stripped-down ByteBuffer API, therefore. That would be, in my opinion, the most 
> low-level API possible. If you add things to such an API that coordinate 
> multithreaded access to the underlying memory, you are already creating a 
> concurrent data structure for a particular set of use cases, which might not 
> cover all possible use cases or might be suboptimal for some of them. So I think this 
> is better layered on top of such an API, not built into it. Low-level multithreaded 
> access to memory is, in my opinion, always going to be "unsafe" from the 
> standpoint of coordination. It's not only the mark/position/limit/ByteOrder that 
> is not multithreaded-friendly about the ByteBuffer API, but the underlying memory 
> too. It would be nice if mark/position/limit/ByteOrder weren't in the way though.

Right, getting mark/position/limit/ByteOrder out of the way would be a good 
first step. (I just realized that ByteOrder is mutable too!)
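To make the idea concrete, here is a minimal sketch of what a stateless, positional view might look like. All names here (MemoryView, etc.) are hypothetical, not a real or proposed JDK API: every access takes an explicit offset, and the byte order is fixed at construction, so there is no mutable mark/position/limit/ByteOrder state to coordinate between threads.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical sketch: a positional view over memory with no mutable
// cursor state. Only absolute accessors are exposed; the byte order is
// chosen once and cannot be changed afterwards.
final class MemoryView {
    private final ByteBuffer buffer; // backing storage, never repositioned

    MemoryView(ByteBuffer buffer) {
        // duplicate() gives this view its own (unused) position/limit,
        // so the caller's buffer state is never touched.
        this.buffer = buffer.duplicate().order(ByteOrder.LITTLE_ENDIAN);
    }

    // Absolute accessors only: no position/limit/mark mutation.
    long getLong(int offset)             { return buffer.getLong(offset); }
    void putLong(int offset, long value) { buffer.putLong(offset, value); }
    int size()                           { return buffer.capacity(); }
}
```

With an API shaped like this, multiple threads can read and write through the same view without stepping on shared cursor state, though, as the quoted message notes, coordinating access to overlapping memory is still the caller's problem.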

I also think you're right that proceeding down a "classic" thread-safe object 
design won't be fruitful. We don't know what the right set of operations is yet, 
so it'll be difficult to know how to deal with thread safety.

One complicating factor is timely deallocation. This is an existing problem with 
direct buffers and MappedByteBuffer (see JDK-4724038). If a "buffer" were 
confined to a single thread, it could be deallocated safely when that thread is 
finished. I don't know how to guarantee thread confinement though.
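A thread-confined design might be sketched as follows. This is purely illustrative (ConfinedBuffer is not a proposed API): the buffer records its owner thread at construction and rejects access from anyone else, which is what would make deallocation safe once the owner is done.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of thread confinement: the buffer remembers the
// thread that created it and every operation checks the caller against
// that owner, so freeing the buffer is safe once the owner is finished.
final class ConfinedBuffer {
    private final Thread owner = Thread.currentThread();
    private ByteBuffer buffer; // null once deallocated

    ConfinedBuffer(int capacity) {
        this.buffer = ByteBuffer.allocateDirect(capacity);
    }

    private void checkAccess() {
        if (Thread.currentThread() != owner)
            throw new IllegalStateException("access from non-owner thread");
        if (buffer == null)
            throw new IllegalStateException("buffer already freed");
    }

    byte get(int offset) {
        checkAccess();
        return buffer.get(offset);
    }

    void free() {
        checkAccess();
        buffer = null; // drop the reference; real unmapping would go here
    }
}
```

Note that this checks confinement, it does not guarantee it: nothing stops the owner from leaking the reference to another thread, which is exactly the difficulty the paragraph above points at.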

On the other hand, if a "buffer" is exposed to multiple threads, deallocation 
requires proper synchronization and checking so that subsequent operations do 
something reasonable (like throw an exception) instead of accessing unmapped or 
repurposed memory. If every access must be checked, this pushes operations toward 
being coarser-grained (bulk) so that the checking overhead is amortized over a 
more expensive operation.
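One way to sketch that trade-off (all names hypothetical, not a proposed API): guard liveness with a read-write lock, perform one check per bulk operation, and let deallocation take the write lock so it waits out in-flight accesses.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: a shared buffer whose liveness is checked once per
// bulk operation under a read lock, amortizing the check over many bytes.
// free() takes the write lock, so it cannot race with an in-flight copy.
final class SharedBuffer {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private ByteBuffer buffer; // null once deallocated

    SharedBuffer(int capacity) {
        this.buffer = ByteBuffer.allocateDirect(capacity);
    }

    // Bulk copy: a single liveness check covers the whole transfer.
    void copyTo(byte[] dst, int offset) {
        lock.readLock().lock();
        try {
            if (buffer == null)
                throw new IllegalStateException("buffer deallocated");
            ByteBuffer view = buffer.duplicate();
            view.position(offset);
            view.get(dst); // reads dst.length bytes
        } finally {
            lock.readLock().unlock();
        }
    }

    // Deallocation waits for in-flight bulk operations via the write lock.
    void free() {
        lock.writeLock().lock();
        try { buffer = null; } finally { lock.writeLock().unlock(); }
    }
}
```

A per-byte get() under the same scheme would pay the lock-and-check cost on every access, which is why the checking pressure pushes the API toward bulk operations.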

I know there has been some thought put into this in the Panama project, but I 
don't know exactly where it stands at the moment. See the MemoryRegion and Scope 
classes there.