Java memory model question
David Holmes
david.holmes at oracle.com
Sat Mar 7 13:30:25 UTC 2020
On 6/03/2020 8:27 pm, Luke Hutchison wrote:
> On Fri, Mar 6, 2020 at 12:46 AM Brian Goetz <brian.goetz at oracle.com> wrote:
>
>> No, but this is a common myth. Method boundaries are not part of the
>> JMM, and the JIT regularly makes optimizations that have the effect of
>> reordering operations across method boundaries.
>>
>
> Thanks. That's pretty interesting, but I can't think of an optimization
> that would have that effect. Can you give an example?
>
> On Thu, Mar 5, 2020 at 7:09 PM David Holmes <david.holmes at oracle.com> wrote:
>
>> Probably a question better asked on concurrency-interest at cs.oswego.edu
>
>
> Thanks, I didn't know about that list.
>
>>> can the termination of the
>>> stream be seen as a memory ordering barrier (in a weak sense)?
>>
>> I would have expected this to be explicitly stated somewhere in the
>> streams documentation, but I don't see it. My expectation is that
>> terminal operations would act as synchronization points.
>>
>
> Right, although I wasn't asking about "high-level concurrency" (i.e.
> coordination between threads), but rather "low-level concurrency" (memory
> operation ordering).
I hope you see now that that is an artificial distinction. When it comes to
the JMM, the only thing that counts is happens-before, and the
synchronization actions that allow you to reason about it.
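
As a rough sketch of that style of reasoning (the class and names below are
purely illustrative), even a plain, non-volatile array store is made visible
to another thread solely by the happens-before edges that Thread.start() and
Thread.join() provide:

class HappensBeforeSketch {
    static int[] data = new int[1];   // plain array, nothing volatile

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> data[0] = 42);  // plain store
        worker.start();  // start() happens-before everything the worker does
        worker.join();   // the worker's actions happen-before join() returning
        System.out.println(data[0]);  // guaranteed to print 42
    }
}
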
> The question arises from the Java limitation that
> fields can be marked volatile, but if the field is of array type, then the
> individual elements of the array cannot be marked volatile. There's no
"volatile" is irrelevant in the context you described. The correctness
neither depends on, nor is achieved by the actual writing of the array
elements. As has been outlined by others the higher-level semantics
ensure that the array stores happen-before the worker threads indicate
they are done, which happens-before the main thread can proceed to
access them.
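
As a concrete, purely illustrative sketch of the pattern you describe
(disjoint plain stores from a parallel IntStream, read only after the
terminal operation has returned):

import java.util.stream.IntStream;

class ParallelFillSketch {
    public static void main(String[] args) {
        int n = 1_000_000;
        long[] results = new long[n];   // plain array, no volatile, no atomics

        IntStream.range(0, n)
                 .parallel()
                 .forEach(i -> results[i] = (long) i * i);  // each index written by exactly one task

        // The stores above happen-before each worker completing its task,
        // which happens-before the terminal forEach returning here, so
        // these plain reads are guaranteed to see the stored values.
        System.out.println(results[0] + " ... " + results[n - 1]);
    }
}

No element-wise volatile is needed for this; the ordering comes entirely from
the terminal operation.
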
David
-----
> "element-wise volatile" array unless you resort to using an
> AtomicReferenceArray, which creates a wrapper object per array element,
> which is wasteful on computation and space.
>
> I understand that the lack of "element-wise volatile" arrays means that
> threads can end up reading stale values if two or more threads are reading
> from and writing to the same array elements. However for this example, I
> specifically exclude that issue by ensuring that there's only ever either
> zero readers / one writer, or any number of readers / zero writers (every
> array element is only written once by any thread, then after the end of the
> stream, there are zero writers).
>
> I'm really just asking if there is some "macro-scale memory operation
> reordering" that could somehow occur across the synchronization barrier at
> the end of the stream. I don't know how deep the rabbit hole of memory
> operation reordering goes.
>
> I have to assume this is not the case, because the worker threads should
> all go quiescent at the end of the stream, so should have flushed their
> values out to at least L1 cache, and the CPU should ensure cache coherency
> between all cores beyond that point. But I want to make sure that can be
> guaranteed.
>
> In practice I have never seen this pattern fail, and it's exceptionally
> useful to be able to write to disjoint array elements from an
> IntStream.range(0, N) parallel stream, particularly as a pattern to very
> quickly parallelize originally-serial code to have maximum efficiency, by
> simply replacing for loops that have no dependencies between operations
> with parallel streams -- but I have been nervous to use this pattern since
> I realized that arrays cannot have volatile elements. Logically my brain
> tells me the fear is unfounded, but I wanted to double check.
>