segments and confinement

Andrew Haley aph at redhat.com
Sat Jul 4 12:35:25 UTC 2020


On 03/07/2020 12:47, Vladimir Ivanov wrote:
>
>>> Other approaches we're considering are a variation of a scheme proposed
>>> originally by Andrew Haley [2] which uses GC safepoints as a way to
>>> prove that no thread is accessing memory when the close operation
>>> happens. What we are investigating is as to whether the cost of this
>>> solution (which would requite a stop-the-world pause) can be ameliorated
>>> by using thread-local GC handshakes ([3]). If this could be pulled off,
>>> that would of course provide the most natural extension for the memory
>>> access API in the multi-threaded case: safety and efficiency would be
>>> preserved, and a small price would be paid in terms of the performances
>>> of the close() operation (which is something we can live with).
>>
>> I don't think that the cost of the safepoint is so very important.
>
> Though the cost of safepoint itself may be negligible for readers in
> such scenario (frequent reads, rare writes), there are consequences
> which affect performance on reader side. (In the worst case, it
> turns a single memory access into the access preceded by a load &
> branch or a load + an indirect access.)

It definitely does, yes. But that only matters in performance terms if
the access check is executed frequently. In practice that means it's
in a loop, and we have a compiler (or two) which should be well
capable of hoisting loads out of loops. C2 may not reliably do that
today, but we can fix it.

>> Firstly, as you note, all we need to do is null the (unique) pointer
>> to the segment and then briefly safepoint all of the mutator threads
>> in turn: the safepoint doesn't have to do anything, we just need to
>> make sure that every thread has reached one. With an efficient
>> safepoint mechanism this could be a very low-cost option. Then we
>> close any memory map underlying the segment.
>
> There's another invariant: liveness check on the segment (re-reading
> unique pointer in the terms you use) and the access shouldn't be
> separated by a safepoint. It may result in repeated checks/loads that
> would be redundant otherwise.

Absolutely. The technique can only work if the compiler knows what
it's doing.

>> Secondly, you don't have to wait for the safepointing to happen.
>> Instead, null the pointer to the segment, send a message to the VM
>> thread and then carry on with something else; when a safepoint or a
>> thread-local handshake happens for some other reason, the VM thread
>> itself can close the memory map.
>
> That's definitely an option for VM allocated/mapped memory. But for a
> custom cleanup actions it'll require running arbitrary Java code from VM
> thread.

True, so in that case you'd need a thread. In practice I guess the
difference would be small, but the job of unmapping a segment is so
simple I think you might as well.

> I think there's a potential to turn safepoint-based mechanism being
> discussed into a powerful synchronization primitive. Forcing a
> global safepoint becomes a barrier which guarantees that new state
> is visible to all readers and it becomes safe to perform some
> actions after the barrier is crossed.

Yes, that's what I was suggesting back in 2017 when I wrote the
message to which Maurizio referred. It never was just about native
buffers.

-- 
Andrew Haley  (he/him)
Java Platform Lead Engineer
Red Hat UK Ltd. <https://www.redhat.com>
https://keybase.io/andrewhaley
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671



More information about the panama-dev mailing list