RFR (S) 8049304: race between VM_Exit and _sync_FutileWakeups->inc()
Daniel D. Daugherty
daniel.daugherty at oracle.com
Fri Aug 28 19:46:52 UTC 2015
On 8/28/15 12:27 PM, Kim Barrett wrote:
> On Aug 28, 2015, at 11:52 AM, Daniel D. Daugherty <daniel.daugherty at oracle.com> wrote:
>> This comment:
>>
>> 570 // Keep a tally of the # of futile wakeups.
>> 571 // Note that the counter is not protected by a lock or updated by atomics.
>> 572 // That is by design - we trade "lossy" counters which are exposed to
>> 573 // races during updates for a lower probe effect.
>>
>> and this comment:
>>
>> 732 // Keep a tally of the # of futile wakeups.
>> 733 // Note that the counter is not protected by a lock or updated by atomics.
>> 734 // That is by design - we trade "lossy" counters which are exposed to
>> 735 // races during updates for a lower probe effect.
>>
>> are not really specific to the monitor subsystem. I think
>> the comments are generally true about the perf counters.
> Yes, but oddly placed here.
>
>> As we discussed earlier in the thread, generally updating the perf
>> counters with syncs or locks adds cost and potentially perturbs the
>> things we are trying to count.
> Yes.
>
>> So I think what you're proposing is putting a lock protocol around
>> the setting of the flag and then having the non-safepoint-safe uses
>> grab that lock while the safepoint-safe uses skip the lock because
>> they can rely on the safepoint protocol in the "normal" exit case.
>>
>> Do I have this right?
> Yes. My question is, does the extra overhead matter in these specific cases?
> And the locking mechanism might be some clever use of atomics rather than
> any sort of "standard" mutex.
I figure the lightest I can get away with is an acquire. There's
an existing lock for the perf stuff, but I don't want to use a
full blown mutex...
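
Roughly the shape I have in mind (just a sketch to show the idea; the
flag name and where it lives are made up, and the OrderAccess calls are
only illustrative, not the actual patch):

  // Hypothetical guard flag; in real life it would live with the
  // other monitor/perf state and start out "available".
  static volatile jint _perf_counters_available = 1;

  // Shutdown side: before the PerfData memory is freed, publish
  // that the counters must no longer be touched.
  OrderAccess::release_store(&_perf_counters_available, 0);

  // Non-safepoint-safe side: acquire-load the flag before bumping
  // the counter, pairing with the release above, so we never write
  // to memory that has already been freed on this path.
  if (OrderAccess::load_acquire(&_perf_counters_available) != 0) {
    ObjectMonitor::_sync_FutileWakeups->inc();
  }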
> And the safepoint-safe uses not only skip the lock,
Agreed.
> but don’t need to check the
> flag at all.
I don't agree with this part. Until the VMThread exits and
raises the permanent safepoint barrier for the remaining daemon
threads, I don't think I can guarantee that we won't go to
a safepoint, clear the flag & free the memory, and then
return from that safepoint... which would allow a daemon
thread to access the now-freed memory...
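
To make the window concrete, the interleaving I'm worried about looks
something like this (a hypothetical timeline, not an observed failure):

  // daemon thread                          VMThread
  // -------------                          --------
  // about to bump the counter
  //                                        VM_Exit safepoint begins
  //                                        flag cleared, PerfData memory freed
  //                                        safepoint ends (permanent barrier
  //                                          for daemon threads not yet raised)
  // _sync_FutileWakeups->inc();            // updates freed memory
  //
  // So even the "safepoint-safe" uses have to do the flag check; they
  // can't rely on the safepoint protocol alone in this window.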
Dan