[jmm-dev] VarHandle.safepoint() methods

Andrew Haley aph at redhat.com
Wed Jan 4 18:04:28 UTC 2017


This is a proposal for a new VarHandle method, but it also is an
extension to the Java Memory Model, so I'm starting the discussion
here.

My intention is to provide an efficient way to handle the case where
reads of a field outnumber writes by several orders of magnitude.
Once a field has been successfully modified, no thread may observe a
stale value.  Writes to such a field are expected to be very rare, and
may be expensive.

It provides a secure and fast way to do something like a volatile
access but without any memory fences (or even any restrictions on
memory reordering) on the reader side.

I propose two new VarHandle methods:

Object getSafepoint()

  Returns the value of a variable, with memory semantics of reading as
  if the variable were declared non-volatile, except that this load
  shall not be reordered with a preceding safepoint.

static void safepoint()

  Wait for a safepoint.  When this method returns, every thread in the
  JVM shall have executed a safepoint.  Ensures that loads and stores
  before the safepoint will not be reordered with loads and stores
  after the safepoint.

[Note that VarHandle.safepoint() does not specify whether the VM
safepoints immediately or the thread waits for a safepoint which would
occur anyway.]
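
To make the intended pairing concrete, here is a minimal sketch of the
protocol, using the proposed names together with the existing
setOpaque(); the VarHandle "vh" and the receiver "holder" are
placeholder names:

// Reader (hot path): effectively a plain load of the field.
Object v = vh.getSafepoint(holder);
// ... use v; no fences, and no reordering restriction other than
// "not reordered with a preceding safepoint".

// Writer (rare path): publish the new value, then wait for a safepoint.
vh.setOpaque(holder, newValue);
VarHandle.safepoint();
// From here on, no thread can still observe the old value.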

A mechanism like this is needed in order to implement
MappedByteBuffer.unmap() securely and efficiently, but I think this
mechanism is sufficiently useful that it should be exposed as an API.

Some background: MappedByteBuffer.unmap() does not exist.
Implementing it in a sufficiently efficient way is an intractable
problem that has been open since 2002.  It is hard to do because it is
essential that no thread can see the underlying memory once a
MappedByteBuffer has been unmapped, because a new MappedByteBuffer may
have been allocated at the same address.  In the current MappedByteBuffer
implementation a buffer is unmapped once the GC has determined it is
not reachable, but there can be a very long delay, and in practice
systems run out of native memory before unmapping happens.

In order to get around this problem, some Java projects have been
using kludges based on Unsafe to access private fields of
MappedByteBuffer and forcibly unmap the buffer.
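
For illustration, one widely used variant of that hack reaches into the
buffer's internal Cleaner and runs it directly; a sketch, relying on the
non-public JDK 8 classes sun.nio.ch.DirectBuffer and sun.misc.Cleaner
(unsupported, and unsafe for exactly the reason described above):

import java.nio.MappedByteBuffer;

class ForcedUnmap {
    // Forcibly releases the mapping by running the buffer's Cleaner.
    // Any later access through the buffer is undefined: the pages may
    // already belong to a different mapping.
    static void forceUnmap(MappedByteBuffer buf) {
        sun.misc.Cleaner cleaner = ((sun.nio.ch.DirectBuffer) buf).cleaner();
        if (cleaner != null) {
            cleaner.clean();
        }
    }
}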

It is possible to use an indirection wrapper for all accesses to a
hypothetical unmappable MappedByteBuffer, but such an indirection
would need to use some kind of volatile memory read on every access in
order to avoid a race condition in which the buffer is closed while
another thread is still accessing it.  ByteBuffers have to be very
fast, and adding a volatile memory access to every MappedByteBuffer
access would render them useless.
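
For comparison, such a wrapper would have roughly the following shape
(a sketch; the class and method names are made up).  The volatile read
of the buffer reference on every single access is the cost in question:

import java.nio.MappedByteBuffer;

class VolatileCheckedBuffer {
    private volatile MappedByteBuffer buf;   // null once closed

    VolatileCheckedBuffer(MappedByteBuffer buf) {
        this.buf = buf;
    }

    byte get(int index) {
        MappedByteBuffer b = buf;            // volatile read, every time
        if (b == null)
            throw new IllegalStateException("buffer is closed");
        return b.get(index);
    }

    void close() {
        buf = null;                          // then actually unmap, somehow
    }
}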

From an implementation point of view, getSafepoint() is a plain read
except that a JIT compiler cannot reorder it with a safepoint.
getSafepoint() can be hoisted out of loops and doesn't inhibit
vectorization, so the overhead of getSafepoint() can be made extremely
low, and hopefully almost zero.

I realize that this definition is problematic in that "safepoints" are
not defined anywhere in the JMM, and it might be tricky to formalize,
but it's sufficiently useful that I believe it's worth the effort.

I also realize that we need much better names for these methods, ones
that do not refer to any HotSpot-specific mechanism.  However, I can't
think of any better names at the moment.

[There is a precedent for this mechanism, but it is internal to the
VM: it is very similar to biased locking.  A biased lock does not
require any memory fences in the case where the lock is biased towards
the current thread, but if the lock is biased towards another thread
the VM safepoints and the bias is removed.  After that, every thread
sees the unbiased lock.]

Finally, a correct (but inefficient) way to implement these methods
would be to use getVolatile() for getSafepoint() and fullFence() for
safepoint().
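
Expressed with existing VarHandle operations, that fallback is simply
(a sketch; "vh" and "holder" are placeholder names):

// Reader-side fallback for getSafepoint(): a volatile load.
Object value = vh.getVolatile(holder);

// Writer-side fallback for safepoint(): a full fence after the store.
vh.setOpaque(holder, newValue);
VarHandle.fullFence();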


Example: A closeable MappedByteBuffer.

public class CloseableMappedByteBuffer {

    // Actually unmaps a byte buffer (native implementation elided).
    private native void unmap(MappedByteBuffer b);

    private volatile MappedByteBuffer _buf;

    // VarHandle for the _buf field; initialization via
    // MethodHandles.lookup().findVarHandle(...) elided.
    private VarHandle vh;

    private CloseableMappedByteBuffer(MappedByteBuffer buf) {
        _buf = buf;
    }

    public static CloseableMappedByteBuffer wrap(MappedByteBuffer buf) {
        return new CloseableMappedByteBuffer(buf);
    }

    // Hot path: a plain-speed load of _buf, ordered only against safepoints.
    private MappedByteBuffer buf() {
        return (MappedByteBuffer) vh.getSafepoint(this);
    }

    public void unmap() {
        MappedByteBuffer buf = buf();
        vh.setOpaque(this, null);
        VarHandle.safepoint();
        // Now every thread sees the updated _buf, so we can unmap safely.
        unmap(buf);
    }

    public byte get() {
        return buf().get();
    }

    public byte get(int index) {
        return buf().get(index);
    }

    // ...etc, for all the ByteBuffer methods.
}

Andrew.


