[jmm-dev] Another way to punt OOTA

Doug Lea dl at cs.oswego.edu
Sun Nov 2 12:36:19 UTC 2014

On 11/01/2014 10:47 AM, Paul E. McKenney wrote:
> On Fri, Oct 31, 2014 at 10:19:42AM -0400, Doug Lea wrote:
>> On 10/28/2014 09:39 PM, Hans Boehm wrote:
>>> Here's another conceivable approach.  This sounds somewhat crazy based on
>>> our prior assumptions, but it may only be mildly crazy, so here goes.
>> I think some of the details are crazy, but I agree with most of
>> the broad points:
>> First, odds are that we will "solve" OOTA and related speculation
>> constraints, but in a somewhat disappointing way: most likely by
>> defining a form of "dep" such that (rf U dep) is acyclic, but where
>> dep may sometimes be undecidable.  Or any of several variations with
>> similar impact.  Paul et al's N4216
>> (http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2014/n4216.html)
>> presents a meta-litmus test along these lines.  With Alan, Peter,
>> Viktor, and others looking at this from various angles, I'm still
>> optimistic about discovering a solution straddled by C11 (allowing
>> OOTA) and JSR133 (disallowing valid transformations). So it still
>> seems best to imagine that we arrive (out-of-thin-air!) at a core
>> model that suffices, even if in some cases it includes caveats
>> along the lines of "read r could be 42 only if P==NP".
> IMHO, the undecidable piece seems a bit overblown.  If you show me
> an OOTA litmus test that is undecidable, I bet I can transform it into
> a message-passing litmus test that is also undecidable.  ;-)

We'd like to say: No program encounters OOTA,
as one of the base safety guarantees mentioned below
(for Java, but related concerns arise for C/C++).
Maybe we cannot quite say this, but we can say something close
that is equivalent for all practical purposes.
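For concreteness, the canonical out-of-thin-air litmus test looks
like this (a minimal Java sketch; the class and method names are
mine, not from any spec):

```java
// Classic OOTA litmus test: x and y are plain (non-volatile) fields,
// both initially 0, and the two methods run on different threads.
class OotaLitmus {
    static int x = 0, y = 0;

    static void thread1() { int r1 = x; y = r1; }  // copy x into y
    static void thread2() { int r2 = y; x = r2; }  // copy y into x
}
```

Each thread only copies a value it read, yet a model permitting
self-justifying speculation allows r1 == r2 == 42: the value appears
out of thin air. Requiring (rf U dep) to be acyclic forbids that
outcome, since each read would read-from a write that depends on it.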

Your N4216 is helpful in making concrete Hans's long-standing
concerns about defining "dependency": Not only do concurrent
semantics become intertwined with the base specifications
of every operation available in a language, they also
encounter computational undecidability problems when
trying to establish simple properties. But at this point,
I don't see how to avoid these problems: We are
pretty sure we need (rf U dep) to be acyclic to arrive
at anomaly-free models. (Or, as Hans suggested, punting,
which no one else seems to like.)

So, I agree that OOTA (and related speculation) concerns
are overblown in that they don't impact any practical
programs, but I still don't know how to make further
progress on core memory models without somehow addressing
them.

> 							Thanx, Paul
>>> - The argument for providing well-defined behavior for data races in Java
>>> was based either on the original model of Java, or on the desire to
>>> unconditionally maintain secondary properties, like memory safety, even in
>>> the presence of data races.
>> Yes. No-OOTA is only one part of the minimal guarantees of even
>> arbitrarily racy Java programs.  This set of predicates forms the Java
>> analog of C/C++ undefinedness.  Informally, we (in conjunction with
>> other parts of the JLS) need to ensure that nothing anyone can write in
>> Java proper causes a JVM to crash (with disclaimers about JNI etc that
>> from a security point of view arbitrarily widen the trusted computing
>> base.  BTW, this is the main reason we need to stop making people use
>> Unsafe to get the effects of enhanced-volatiles.)  As I sketched out
>> last January (mostly on the pre-jmm9 list), we ought to solidify
>> meanings of memory safety and security -- these are at least as
>> important as guaranteeing SC for DRF programs. And from there, we need
>> simple characterizations of at least release->acquire and create->use
>> (constructors).  But I don't think we can simply characterize all the
>> intermediate points. It seems impossible to routinely distinguish
>> subtly correct vs subtly incorrect code. For example you need to know
>> that the String hash function is idempotent, and that if hash is zero,
>> it intentionally recomputes, to see that racy lazy initialization of
>> String.hashCode is OK.
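[To make the String.hashCode example concrete, the racy
lazy-initialization idiom being described looks roughly like this --
a simplified stand-in, not the JDK source:]

```java
// Racy-but-benign lazy hash caching, in the style of String.hashCode.
class LazyHash {
    private final char[] value;
    private int hash;  // plain field: races on it are benign here

    LazyHash(String s) { this.value = s.toCharArray(); }

    public int hashCode() {
        int h = hash;            // racy read: may see 0 or a cached result
        if (h == 0) {            // 0 means "not computed" -- or a genuine
            for (char c : value) //   hash of 0, which harmlessly recomputes
                h = 31 * h + c;
            hash = h;            // racy write: idempotent, any winner is fine
        }
        return h;
    }
}
```

Correctness hinges on exactly the two subtle facts named above: the
computation is idempotent (every thread computes the same h), and a
stored value of 0 is indistinguishable from "not yet computed".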
>>> - We're essentially already making the preceding dubious assumption.  We
>>> have all sorts of library APIs that don't specify how they behave in the
>>> presence of races, but only say that you shouldn't do that.  We're somehow
>>> supposed to conclude that the heap can't be corrupted even in the presence
>>> of racy library calls, because of the (unstated?) assumption that libraries
>>> should behave as though they were implemented in Java, and Java programs
>>> preserve e.g. memory safety.
>> To emphasize: I agree.
>>> - Switch to a much more C++-like (or Java library-like) model in which data
>>> races have something like "undefined behavior".  Exactly how to model that
>>> is an open question.  Ordinary loads can only see stores that happen-before
>>> them, but racing loads trigger "undefined behavior". "Undefined behavior"
>>> should be defined to allow reporting an error and produce any type correct
>>> answer for the racing load.
>> And to emphasize: I disagree in the general case because I don't
>> know how to do it. However, I don't think there is anything stopping
>> JVM+core-libraries conspiring to throw exceptions in particular
>> kinds of races that are never correct. Or for add-on tools to
>> dynamically or statically detect them.
>>> - [Probably challenging, others understand the Java constraints better than
>>> I] Introduce a mechanism for (nearly) unordered memory-order-relaxed like
>>> racing loads.  Require current racing accesses to use that mechanism
>>> instead.  (Open question: coherence)
>> I have yet to see a compelling example/case for guaranteeing coherence
>> for relaxed accesses to volatiles (i.e., in Java 9+, the
>> enhanced-volatile mechanism for a relaxed read.)
>> -Doug
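[For readers wondering what "coherence for relaxed accesses" would
rule out: the CoRR (coherence of read-read) litmus test below
sketches it using the VarHandle opaque mode that the
enhanced-volatiles work eventually shipped as in Java 9 -- an API
later than this 2014 thread; the names are real, but treating opaque
as the analog of a relaxed access is this sketch's gloss. Opaque mode
does guarantee per-variable coherence: a reader that observes the new
value cannot observe the old value on a later read of the same
variable.]

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// CoRR litmus test: writer publishes 1; reader reads x twice.
// Coherence forbids the outcome (r1, r2) == (1, 0) -- seeing the
// new value and then the old one. Without coherence it is allowed.
class CorrLitmus {
    static int x = 0;
    static final VarHandle X;
    static {
        try {
            X = MethodHandles.lookup()
                    .findStaticVarHandle(CorrLitmus.class, "x", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Writer thread:
    static void writer() { X.setOpaque(1); }

    // Reader thread: with coherent reads, r1 == 1 implies r2 == 1.
    static int[] reader() {
        int r1 = (int) X.getOpaque();
        int r2 = (int) X.getOpaque();
        return new int[] { r1, r2 };
    }
}
```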
