Draft Spec for Fourth Preview of Pattern Matching for Switch (JEP 433) and Second Preview of Record Patterns (JEP 432) now available
Dan Smith
daniel.smith at oracle.com
Thu Nov 3 16:02:57 UTC 2022
> On Nov 1, 2022, at 12:58 PM, Brian Goetz <brian.goetz at oracle.com> wrote:
>
>> 14.11.2: Design suggestion: rename "enhanced switch" to "pattern switch", define it as only those switches that make use of patterns, and don't worry about the remaining "switch with new features" corner cases. It's just such an important concept that I think there's a benefit to making the distinction really clean and obvious. E.g., asking someone new to Java to memorize the ad hoc set of types that don't demand exhaustiveness seems unhelpfully complicated.
>>
>> (Corner cases I'm thinking about: want to use a null constant but not be exhaustive? Fine. Want to have an Object input type but an empty switch body? Pointless, but fine. Etc.)
>
> I think the motivation here is that we want to minimize the surface area of non-exhaustive switches, by quarantining the "need not be exhaustive" behavior to those switches that would have compiled under Java 8. "Switches must be exhaustive, except for statement switches over T1..Tn with all constant labels."
Minimal surface area is good, but that can be at odds with minimal "perimeter"—that is, how hard is it to draw the line distinguishing the old style from the new style? I like making it about patterns because they're such an obvious departure from what came before, and what that gains in reduced perimeter seems to justify a small sacrifice of surface area.
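To illustrate the line I'd like to draw, here's a sketch against the current preview rules (hypothetical Shape hierarchy, purely illustrative):

    // An old-style statement switch: constant labels, int selector. As in
    // Java 8, it need not be exhaustive; unmatched values just do nothing.
    static void describe(int code) {
        switch (code) {
            case 1 -> System.out.println("one");
            case 2 -> System.out.println("two");
            // no default required
        }
    }

    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side) implements Shape {}

    // A switch that uses pattern labels: under the preview rules it must be
    // exhaustive even as a statement, so deleting either case below is a
    // compile-time error.
    static void describe(Shape shape) {
        switch (shape) {
            case Circle c -> System.out.println("circle, radius " + c.radius());
            case Square s -> System.out.println("square, side " + s.side());
        }
    }

The second switch is unmistakably "new style" because it uses patterns; the first is exactly what would have compiled under Java 8.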
>> 14.14.2: I'm sure there's been some discussion about this already, but the use of MatchException for nulls in 'for' loops but NPE for nulls in switches also seems like a sad historical wart. What's wrong with an NPE from a 'for' loop?
>
> I think if the RHS of the foreach loop is null, we should NPE (as before.) But if one of the _elements_ of the RHS array/iterable is null, then we should ME on the record pattern. (Otherwise we have a sharp edge between a top-level record pattern and a nested record pattern.)
Eventually, I guess the rewrite will be expressed as a 'let' expression (using whatever syntax we settle on):
let Record(var x, var y) = iterator.next();
The right thing to do in for-each is thus whatever the right thing is to do in a 'let' statement.
I'm kind of torn, because it seems straightforward to say this is an NPE (deconstructing a record is not that different from doing a field access). But it's true that a nested record pattern in a switch will cause a MatchException when given a null component value. Meanwhile, a top-level record pattern in a switch will cause an NPE. Hmm. No good way to make all of that consistent...
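For concreteness, the switch-side contrast I'm describing, sketched with hypothetical Point and Line records:

    record Point(int x, int y) {}
    record Line(Point start, Point end) {}

    // Top-level record pattern: if p is null, the switch throws
    // NullPointerException, since a null selector with no 'case null' label
    // NPEs before any pattern is considered.
    static void show(Point p) {
        switch (p) {
            case Point(var x, var y) -> System.out.println(x + ", " + y);
        }
    }

    // Nested record pattern: if line.start() is null, the nested Point
    // pattern fails to match, the value falls into the switch's remainder,
    // and the switch throws MatchException rather than NPE.
    static void show(Line line) {
        switch (line) {
            case Line(Point(var x1, var y1), Point(var x2, var y2)) ->
                System.out.println(x1 + "," + y1 + " -> " + x2 + "," + y2);
        }
    }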
>> 14.30.1, 14.30.2: I'm not sold on *any patterns*, *resolved patterns*, and *executable switch blocks*.
>>
>> The semantics are fine—some type patterns will match null, others will not. But it seems to me that this can be a property of the type pattern, one of many properties of programs that we determine at compile time. No need to frame it as rewriting a program into something else. (Compare the handling of the '+' operator. We don't rewrite to a non-denotable "concatenation expression".)
>>
>> Concretely:
>> - The pattern matching runtime rules can just say "the null reference matches a type pattern if the type pattern is unconditional".
>> - We can make it a little more clear that a type pattern is determined to be unconditional, or not, based on its context-dependent match type (is that what we call it?)
>>
>> For a *compiler*, it will be useful to come up with an encoding that preserves the compile-time "unconditional" property in bytecode. But that's a compiler problem, not something JLS needs to comment on.
>
> It is a property of the pattern *and the type being matched*. We tried writing it the other way and it was not necessarily better....
Yes, it's a property of the pattern derived from enclosing context—but this is nothing new. The interpretation of names, for example, is heavily dependent on enclosing context.
In any case, it's just a spec presentation question, so I suppose I can discuss further with Gavin and Alex and see what conclusion we come to.
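For reference, the runtime behavior that wording is meant to capture, sketched with a hypothetical Box record:

    record Box(Object content) {}

    static String describe(Box box) {
        return switch (box) {
            // 'String s' is conditional for the component type Object: it
            // needs a runtime type test, so it never matches a null component.
            case Box(String s) -> "a string: " + s;
            // 'Object o' is unconditional for the component type Object: no
            // runtime test is needed, so it matches anything, including null.
            case Box(Object o) -> "something else: " + o;
        };
    }

    // describe(new Box("hi"))  ->  "a string: hi"
    // describe(new Box(null))  ->  "something else: null"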
>> 14.30.3: A record pattern can't match null, but for the purpose of dominance, it's not clear to me why a record pattern can't be considered unconditional, and thus dominate the equivalent type pattern or a 'default'.
>
> Unconditional means "no runtime checks"; it's like a static cast for which we don't emit a checkcast. Exhaustive means "satisfies the type system, but might have bad values" (null, novel enum constants, novel subtypes, at any level in the tree.)
"Unconditional" may be the wrong word, then, but my real complaint here is that I think a record pattern should be able to dominate an unconditional type pattern or a 'default' label.