RFR 8224875: Shenandoah: ParallelCleaning code unloading should take lock to protect shared code roots array

Aleksey Shipilev shade at redhat.com
Wed May 29 13:37:56 UTC 2019


On 5/29/19 1:40 PM, Zhengyu Gu wrote:
> On 5/29/19 7:31 AM, Aleksey Shipilev wrote:
>> On 5/29/19 1:29 PM, Zhengyu Gu wrote:
>>>> *) Since you moved the assert in ShenandoahCodeRoots::remove_nmethod, is it worth it to move it in
>>>> ShenandoahCodeRoots::add_nmethod too?
>>>>
>>> Sure.
>>
>> So... should we make (add|remove)_nmethod symmetrical? If remove_* can acquire the lock, then add_*
>> should do that too?
> I don't think add_nmethod needs an additional lock, because it is always called with the
> CodeCache_lock held outside safepoints. The new lock only protects concurrent workers when they
> unregister nmethods during safepoint cleanup.

Okay, this asymmetry still irks me.

I see G1CollectedHeap::register_nmethod eventually calls into
HeapRegion::add_strong_code_root_locked, which does assert_locked_or_safepoint(CodeCache_lock) and
then performs this more involved check downstream:

  assert((CodeCache_lock->owned_by_self() ||
         (SafepointSynchronize::is_at_safepoint() &&
          (_m.owned_by_self() || Thread::current()->is_VM_thread()))),

How much would it really hurt for us to be on the safe side and grab the lock on the add_* path too?
We would normally enter there with the CodeCache_lock already held, so this might just protect us
from some rare corner case.
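For the sake of discussion, the symmetric shape I have in mind looks roughly like this. This is an
illustrative C++ sketch only, not HotSpot code: the class and member names are hypothetical, and a
plain std::mutex stands in for the new ShenandoahCodeRoots lock.

```cpp
#include <algorithm>
#include <mutex>
#include <vector>

// Sketch of the symmetric-locking pattern: both the add and the remove
// path take the same lock guarding the shared roots array, so neither
// path relies solely on a lock held (or not) by the caller.
class CodeRootsTable {
  std::mutex _lock;         // stands in for the new code-roots lock
  std::vector<int> _roots;  // stands in for the shared code roots array

public:
  void add(int nm) {
    std::lock_guard<std::mutex> g(_lock);  // symmetric with remove()
    _roots.push_back(nm);
  }

  void remove(int nm) {
    std::lock_guard<std::mutex> g(_lock);
    _roots.erase(std::remove(_roots.begin(), _roots.end(), nm),
                 _roots.end());
  }

  bool contains(int nm) {
    std::lock_guard<std::mutex> g(_lock);
    return std::find(_roots.begin(), _roots.end(), nm) != _roots.end();
  }
};
```

With both paths locked, it no longer matters whether a caller reaches add() with the CodeCache_lock
held or not; the table's own lock keeps the shared array consistent either way.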

Thanks,
-Aleksey


