segments and confinement

Vladimir Ivanov vladimir.x.ivanov at oracle.com
Fri Jul 3 11:47:40 UTC 2020


>> Other approaches we're considering include a variation of a scheme proposed
>> originally by Andrew Haley [2] which uses GC safepoints as a way to
>> prove that no thread is accessing memory when the close operation
>> happens. What we are investigating is whether the cost of this
>> solution (which would require a stop-the-world pause) can be ameliorated
>> by using thread-local GC handshakes ([3]). If this could be pulled off,
>> that would of course provide the most natural extension for the memory
>> access API in the multi-threaded case: safety and efficiency would be
>> preserved, and a small price would be paid in terms of the performance
>> of the close() operation (which is something we can live with).
> 
> I don't think that the cost of the safepoint is so very important.

Though the cost of the safepoint itself may be negligible for readers in 
such a scenario (frequent reads, rare writes), there are consequences 
which affect performance on the reader side. (In the worst case, it 
turns a single memory access into an access preceded by a load and a 
branch, or a load plus an indirect access.)
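
To make this concrete, here is a minimal sketch (all names are 
hypothetical, not the actual API) of what the reader-side check amounts 
to: every access carries a load of a liveness flag and a branch in 
front of the raw memory access.

final class Segment {
    // written once, by the thread that closes the segment
    private volatile boolean closed;
    private final long base;
    private final long length;

    Segment(long base, long length) {
        this.base = base;
        this.length = length;
    }

    byte get(long offset) {
        if (closed)                        // extra load & branch per access
            throw new IllegalStateException("segment already closed");
        if (offset < 0 || offset >= length)
            throw new IndexOutOfBoundsException();
        return readByteAt(base + offset);  // the actual memory access
    }

    // stand-in for the raw access (e.g. Unsafe.getByte)
    private static byte readByteAt(long addr) {
        return 0;
    }
}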

> Firstly, as you note, all we need to do is null the (unique) pointer
> to the segment and then briefly safepoint all of the mutator threads
> in turn: the safepoint doesn't have to do anything, we just need to
> make sure that every thread has reached one. With an efficient
> safepoint mechanism this could be a very low-cost option. Then we
> close any memory map underlying the segment.

There's another invariant: the liveness check on the segment 
(re-reading the unique pointer, in your terms) and the access itself 
must not be separated by a safepoint. Enforcing that may result in 
repeated checks/loads that would otherwise be redundant.
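
To illustrate (reusing the hypothetical Segment sketch above): a 
bulk-read loop has a safepoint poll on its back edge, and if the 
segment may be closed at that poll, the liveness check can no longer 
be hoisted out of the loop.

// Without the invariant, the check inside get() could be hoisted out
// of the loop and the body compiled down to a bare read. But each loop
// back edge carries a safepoint poll; if the segment may be closed at
// that poll, a hoisted check goes stale, so the load & branch must be
// re-issued on every iteration.
static long sum(Segment s, long n) {
    long sum = 0;
    for (long i = 0; i < n; i++) {
        sum += s.get(i);  // re-checks liveness every time around
    }                     // <- safepoint poll on the back edge
    return sum;
}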

> Secondly, you don't have to wait for the safepointing to happen.
> Instead, null the pointer to the segment, send a message to the VM
> thread and then carry on with something else; when a safepoint or a
> thread-local handshake happens for some other reason, the VM thread
> itself can close the memory map.

That's definitely an option for VM-allocated/mapped memory. But for 
custom cleanup actions it would require running arbitrary Java code 
from the VM thread.
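
For illustration (hypothetical API, not the actual one): a segment 
carrying a user-supplied cleanup action. When the backing memory is a 
plain VM mapping, the deferred work is just an unmap that the VM thread 
could perform at the next safepoint; with a custom action, it is 
arbitrary Java code.

final class ClosableSegment implements AutoCloseable {
    private volatile boolean closed;
    private final Runnable cleanup;    // arbitrary user code

    ClosableSegment(Runnable cleanup) {
        this.cleanup = cleanup;
    }

    @Override
    public void close() {
        closed = true;   // liveness transition
        // A plain VM mapping could be unmapped from the VM thread
        // later on, but this action is arbitrary Java code, so it
        // cannot simply be handed off to the VM thread.
        cleanup.run();
    }
}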

I think there's potential to turn the safepoint-based mechanism being 
discussed into a powerful synchronization primitive. Forcing a global 
safepoint becomes a barrier which guarantees that the new state is 
visible to all readers, so it becomes safe to perform certain actions 
once the barrier has been crossed.
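
A hedged sketch of that idiom (forceGlobalSafepoint() is a hypothetical 
stand-in; there is no such public JDK API, and inside HotSpot it would 
be a global safepoint or a round of thread-local handshakes over all 
threads):

final class SafepointBarrier {
    private volatile boolean closed;    // the "new state"

    void close() {
        closed = true;             // 1. publish the new state
        forceGlobalSafepoint();    // 2. barrier: every mutator thread
                                   //    crosses a safepoint, so no reader
                                   //    is still inside an access that
                                   //    observed the old state
        releaseResources();        // 3. safe to act once the barrier
                                   //    has been crossed
    }

    private static void forceGlobalSafepoint() {
        // hypothetical: block until all threads have passed a safepoint
    }

    private void releaseResources() {
        // e.g. unmap the memory backing the segment
    }
}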

Best regards,
Vladimir Ivanov

