RFR (S): 8228857: Refactor PlatformMonitor into PlatformMutex and PlatformMonitor

Kim Barrett kim.barrett at oracle.com
Wed Aug 7 20:18:20 UTC 2019


> On Aug 7, 2019, at 9:50 AM, Per Liden <per.liden at oracle.com> wrote:
> 
> Hi Kim/David,
> 
> On 8/7/19 12:10 AM, David Holmes wrote:
> [...]
>>> It is true that only a pthread_cond_t that has been allocated and
>>> later destroyed can cause problems. However, we (the VM) might not be
>>> the only place doing such. Native code, library code, &etc might also
>>> do that. Even the current mechanism doesn't fully protect us from
>>> that. This was something that Patricio and I recognized when we did
>>> the MacOSX workaround; it's not completely bullet proof.
>>> 
>>> However, with the current mechanism of allocating pthread_mutex/cond_t
>>> pairs and freelisting them, we highwater pretty quickly. We don't have
>>> many places that actually release them. (I don't remember whether
>>> there's anything other than the task terminator mechanism.) And once
>>> we've highwatered, the VM is safe from any further condvar
>>> destructions by non-VM code. So the only risk is from native code
>>> using pthread_cond_t's before that highwater mark is reached.
>>> 
>>> If the VM only freelists its pthread_cond_t's and not its
>>> pthread_mutex_t's, instead allocating and deleting the latter normally
>>> (as Per suggested and I had originally intended to suggest), then the
>>> VM's pthread_mutex_t's are subject to getting scrambled by ongoing
>>> non-VM pthread_cond_t usage.
>> Yes, you are right. I was only thinking about VM usage, but if the VM is hosted then mutexes/condvars could be used heavily, and not using the freelist for the mutex would greatly increase the risk of reintroducing the bug.
>> As you say, the freelist approach itself is not bullet-proof, and I'm now worried about what complex hosting applications may encounter! :( I think we may need to try to engage with Apple to get the full details on this bug so that we know what our exposure may be. I hope they fixed it with an interim patch and not only with a full release update!
>> So in terms of this RFE I'll leave things as they stand, with the mutex/cond freelist being used in PlatformMutex. I hope the slight space wastage is not significant enough to mean that Zlocks will not consider using PlatformMutex.
> 
> It would be really interesting to know if this has been fixed by Apple, and hopefully also backported to the macOS versions we support. On systems that have this bug, we could have all kinds of random VM memory corruption (caused by e.g. native code/libraries using condvars), which we can't fix. Right?

The symptoms of the bug are (1) mutex_unlock aborts, or (2) mutex_lock
hangs.  Both are quite bad, but neither is random memory corruption.

Patricio created a standalone reproducer and attached it to
JDK-8218975 (stale_entry_test.c).  (It was also included in the bug
report Patricio filed with Apple.  It seems only the person filing a
bug has external visibility on its status.)

There were some possibly relevant code changes in Mojave, but I don't
know if anyone has tried running that test on Mojave.

> May I propose a third alternative, which keeps the best of both worlds: Don't mix the two types, i.e. don't let PlatformMonitor inherit from PlatformMutex. So, we'd just keep exactly what we have and just add a PlatformMutex type, which for POSIX would be just:
> 
> class PlatformMutex : public CHeapObj<mtSynchronizer> {
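(To make that concrete, here is a rough sketch of what such a standalone
POSIX PlatformMutex could look like.  The bodies below are illustrative,
not the actual code from Per's mail, and they assume the usual POSIX
headers and the assert_status helper used in the os_posix code:)

  class PlatformMutex : public CHeapObj<mtSynchronizer> {
    pthread_mutex_t _mutex;   // storage embedded directly, no freelist

   public:
    PlatformMutex() {
      int status = pthread_mutex_init(&_mutex, NULL);
      assert_status(status == 0, status, "mutex_init");
    }

    ~PlatformMutex() {
      int status = pthread_mutex_destroy(&_mutex);
      assert_status(status == 0, status, "mutex_destroy");
    }

    void lock() {
      int status = pthread_mutex_lock(&_mutex);
      assert_status(status == 0, status, "mutex_lock");
    }

    void unlock() {
      int status = pthread_mutex_unlock(&_mutex);
      assert_status(status == 0, status, "mutex_unlock");
    }

    bool try_lock() {
      int status = pthread_mutex_trylock(&_mutex);
      assert_status(status == 0 || status == EBUSY, status, "mutex_trylock");
      return status == 0;
    }
  };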

This doesn't help if PlatformMutexes get dynamically allocated on an
ongoing basis.  The pthread_mutex_t might end up at the same address
as some problematic pthread_cond_t that was allocated, used, and freed
by 3rd party code.
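Schematically, the hazard is something like this (a hypothetical
sequence for illustration only, not Patricio's actual reproducer):

  // 3rd party code (a library, JNI code, etc.), at some point:
  pthread_cond_t* cv = (pthread_cond_t*)malloc(sizeof(pthread_cond_t));
  pthread_cond_init(cv, NULL);
  // ... used in whatever way trips the underlying macOS bug ...
  pthread_cond_destroy(cv);
  free(cv);

  // Later, in the VM: a dynamically allocated PlatformMutex whose
  // embedded pthread_mutex_t happens to land at that same heap address.
  PlatformMutex* m = new PlatformMutex();
  m->lock();     // may hang,
  m->unlock();   // or abort, per the symptoms described above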

We could forbid such usage.  Or perhaps we could use arena allocation
for PlatformMutex?  Fixed arena size, allocated up front, would
prevent later 3rd party condvar releases from clobbering our mutexes.
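Something along these lines is what I mean by arena allocation (purely
a sketch; the class name, the fixed capacity, and the unsynchronized
bookkeeping are placeholders, and the arena itself would need a guard
lock):

  // All pthread_mutex_t storage for PlatformMutex comes from a block
  // reserved up front; released slots go back onto a VM-private
  // freelist, never to the C heap, so later 3rd party condvar
  // destruction can never land on these addresses.
  class PlatformMutexArena : public AllStatic {
    static const size_t     _capacity = 1024;    // illustrative fixed size
    static pthread_mutex_t  _slots[_capacity];   // reserved at startup
    static pthread_mutex_t* _freelist[_capacity];
    static size_t           _free_count;
    static size_t           _next_unused;

   public:
    static pthread_mutex_t* allocate() {
      if (_free_count > 0)          return _freelist[--_free_count];
      if (_next_unused < _capacity) return &_slots[_next_unused++];
      fatal("PlatformMutex arena exhausted");
      return NULL;
    }

    static void release(pthread_mutex_t* m) {
      _freelist[_free_count++] = m;
    }
  };

PlatformMutex would then take its pthread_mutex_t from allocate() and
return it via release(), rather than embedding the storage directly in
a CHeapObj allocation.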

It still doesn't help if a problematic destroyed condvar already
exists when the VM is started.  But that's not new; we don't have any
way to protect against that.



