[jmm-dev] VarHandle.safepoint() methods

David Holmes david.holmes at oracle.com
Thu Jan 5 13:03:48 UTC 2017


On 5/01/2017 7:35 PM, Andrew Haley wrote:
> On 04/01/17 21:47, David Holmes wrote:
>> Hi Andrew,
>>
>> On 5/01/2017 4:04 AM, Andrew Haley wrote:
>>> This is a proposal for a new VarHandle method, but it also is an
>>> extension to the Java Memory Model, so I'm starting the discussion
>>> here.
>>>
>>> My intention is to provide an efficient way to handle the case where a
>>> field has reads which outnumber writes by several orders of magnitude.
>>> Once a field has been successfully modified, no thread may observe a
>>> stale value.  Writes to a field are expected to be very rare, and can
>>> be expensive.
>>>
>>> It provides a secure and fast way to do something like a volatile
>>> access but without any memory fences (or even any restrictions on
>>> memory reordering) on the reader side.
>>>
>>> I propose two new VarHandle methods:
>>>
>>> Object getSafepoint()
>>>
>>>   Returns the value of a variable, with memory semantics of reading as
>>>   if the variable was declared non-volatile, except that this load
>>>   shall not be reordered with a preceding safepoint.
>>
>> I'm no compiler guy but I would think, given the compiler has no idea
>> what may lead to a safepoint, that this would have to preclude any
>> reordering of any kind around the load.
>
> We have a pretty good idea what needs to be done to the compiler to
> make this work.  It's not easy, but neither is it impossible.

I understand that the JIT knows when it inserts a safepoint poll, but 
otherwise I don't believe the JIT knows which calls into the runtime 
might lead to safepoints. I guess it will just have to assume the worst.

But I'm also unclear why this is being described in terms of the JIT - 
the API semantics would have to be honoured by the interpreter as well.
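
Just to check my understanding of the reader side, here is a rough sketch 
of how I imagine a guarded read would look with the proposed getSafepoint() 
(the class, fields and helper method are invented for illustration, and of 
course this won't compile against any current JDK):

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class Window {
    boolean closed;        // written very rarely, read on every access
    long base;             // e.g. the address of a mapped region

    static final VarHandle CLOSED;
    static {
        try {
            CLOSED = MethodHandles.lookup()
                        .findVarHandle(Window.class, "closed", boolean.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    int read(int offset) {
        // Proposed getSafepoint(): costs no more than a plain load, but may
        // not be reordered with a preceding safepoint, and no safepoint may
        // occur between this check and the access below.
        if ((boolean) CLOSED.getSafepoint(this))
            throw new IllegalStateException("unmapped");
        return readFromRegion(base, offset);  // invented raw-access helper
    }

    private int readFromRegion(long base, int offset) { return 0; }  // stub
}

Is that roughly the shape you have in mind?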

>>> static void safepoint()
>>>
>>>   Wait for a safepoint.  When this method returns, every thread in the
>>>   JVM shall have executed a safepoint.  Ensures that loads and stores
>>>   before the safepoint will not be reordered with loads and stores
>>>   after the safepoint.
>>>
>>> [Note that VarHandle.safepoint() does not specify whether the VM
>>> safepoints immediately or the thread waits for a safepoint which would
>>> occur anyway.]
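
And, continuing the Window sketch above, my reading of the writer side for 
the unmap case is something like the following (again the names are 
invented, and I'm not assuming anything about how the store to the guard 
field must be performed):

    // Writer side of the sketch above: rare, and allowed to be expensive.
    void unmap() {
        CLOSED.setVolatile(this, true);   // publish the new state of the guard
        VarHandle.safepoint();            // proposed: returns only once every
                                          // thread has passed a safepoint, so no
                                          // reader can still be between its guard
                                          // check and its raw access
        releaseMapping(base);             // invented: actually unmap the region
    }

    private void releaseMapping(long base) { }  // stub
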
>>
>> So you're relying on the existence of a global synchronization point
>> (aka "safepoint") in the VM and then using that to achieve "global
>> synchronization" without having to do any work in the common case.
>
> Yes.
>
>> As an API I find that somewhat ill-defined. I would not want to see VM
>> safepoints enshrined in a user-level API. Nor would I want to see a
>> distinct global synchronization mechanism.
>
> Well, the global synchronization mechanism exists already in most
> JVMs, and there's not much chance of it going away.  All this does
> is expose it.

Exposing it - when "it" is an internal implementation detail - is what I 
am concerned about.

>> In terms of implementation the existing VM safepoint mechanism has no
>> means to "wait for a safepoint", but could trivially request one. I'll
>> also note that we're always looking at ways to remove the need for
>> global safepoints in the VM, so a request may be essential - though when
>> using GuaranteedSafepointInterval a sleep of that duration would ensure
>> a global safepoint has occurred.
>
> Sure.  I was thinking of a version of VarHandle.safepoint() which took a
> timeout: if a safepoint didn't arrive after N seconds, force one.

I'd be interested to hear what mechanism you envisage for waiting for, 
and detecting, that a safepoint has arrived.

David
-----

>>> A mechanism like this is needed in order to implement
>>> MappedByteBuffer.unmap() securely and efficiently, but I think this
>>> mechanism is sufficiently useful that it should be exposed as an API.
>>
>> I'm unclear how you deal with detecting that the buffer has been unmapped
>> between the call to buf() and the get() in buf().get(). Are you also
>> requiring that no safepoint can occur in that interval?
>
> Yes, exactly.  The compiler knows that there is no safepoint in that
> interval.
>
>>> From an implementation point of view, getSafepoint() is a plain read
>>> except that a JIT compiler cannot reorder it with a safepoint.
>>> getSafepoint() can be hoisted out of loops and doesn't inhibit
>>> vectorization, so the overhead of getSafepoint() can be made extremely
>>> low, and hopefully almost zero.
>>
>> How does a JIT know what may lead to a safepoint?
>
> I don't know what you mean by this question.  It already knows this.
>
>> How can you hoist out of a loop if there is a safepoint check on
>> each loop iteration?
>
> You can't, but not all loops have safepoints.
>
> Andrew.
>

