Opting into totality
Brian Goetz
brian.goetz at oracle.com
Tue Aug 25 18:08:50 UTC 2020
> I have thought about what you have just said; (a) I agree, and (b) it has made me realize that I have unnecessarily and unreasonably conflated the o.t. issue with the fallthrough issue. So let me try once more. For context, here are the three important paragraphs again, and the only changes I have made are to remove from the third paragraph the sentences that address fallthrough and fallout:
While we're doing a conflation inventory, here's another candidate
(which I will soon reject after drawing the distinction): totality
checking vs automatic residue handling. The interesting cases here all
involve getting the compiler to do TWO categories of extra work for the
user that is not done on old-school (partial, statement) switches:
- Validating that the cases are (optimistically) total, or giving a
compile error
- Inserting synthetic cases to throw on the residue of optimistic totality
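For concreteness, here is a minimal sketch of those two jobs, using sealed interfaces and type patterns along the lines we've been discussing (Shape, Circle, and Square are made-up names):

    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side)   implements Shape {}

    class Areas {
        static double area(Shape s) {
            // (1) Totality check: the compiler validates that the cases are
            //     optimistically total over Shape, or issues a compile error.
            return switch (s) {
                case Circle c -> Math.PI * c.radius() * c.radius();
                case Square q -> q.side() * q.side();
                // (2) Residue handling: the compiler inserts synthetic throwing
                //     cases for the residue -- a null Shape, or a "novel" Shape
                //     subtype introduced after this switch was compiled.
            };
        }
    }

Neither job is done for an old-school partial statement switch, where unmatched values simply fall out the bottom.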
I'm not seriously suggesting that we should treat them as separate
features, as much as observing that we may still only be circling the
target. In thinking about it this morning, I initially thought that a
better axis for dividing the problem would be on the "shape" of the
switch; some switches are "tree shaped" (type-hierarchy-based switches
which lean hard on partial ordering of cases, usually culminating in
catch-alls) and some are "star shaped" (all of the traditional switch
examples, plus many switches over sealed types.) I initially thought
that `switch case` was mostly about the latter shape, but then I
realized this is a false distinction; both shapes produce residue, even when no
sealing or enums are involved, as a side effect of the fact that we don't want
to force users to spell out all the silly cases.
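Roughly, the two shapes look like this (a sketch with made-up names; the first leans on the ordering of cases within a hierarchy, the second enumerates disjoint alternatives):

    class Shapes {
        // "Tree shaped": cases lean hard on the partial ordering of a type
        // hierarchy, culminating in a catch-all.
        static String describe(Object o) {
            return switch (o) {
                case Integer i -> "an int";
                case Number n  -> "some other number";
                case Object x  -> "something else";   // catch-all
            };
        }

        enum Suit { CLUBS, DIAMONDS, HEARTS, SPADES }

        // "Star shaped": disjoint cases over an enum (or a sealed type).
        static boolean isRed(Suit s) {
            return switch (s) {
                case HEARTS, DIAMONDS -> true;
                case CLUBS, SPADES    -> false;
            };
        }
    }

Both still have residue: a null selector in either case, and, in the star case, a constant added after the switch was compiled.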
The pervasiveness of o.t., and the complex shape of residue that comes
out of even well-behaved switches, says to me that the place worth spending
complexity budget on is not totality, but _optimistic totality with
residue_. (Strong totality is just residue = empty; weak totality is
just residue = { null }, so collapsing these all may make sense.)
That's because the set of switches that are truly total, with no optimism and
no residue, is pretty simple and probably doesn't need an awful lot of
help. (For comparison, we don't need the language to help us assert
that an if ... else chain is total; if the last entry is an "else",
we're there.) So I'm happy collapsing these two sub-features, but I
want to keep the focus on the optimistic/residue part, because that's
where the value is.
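As a sketch of how the residues line up (using the taxonomy above; Point is a made-up name, and the sealed-Shape switch earlier is the optimistically total case with residue = { null, novel }):

    record Point(int x, int y) {}

    class Totality {
        // In this taxonomy: a default clause makes the switch strongly
        // total, residue = { }.
        static String kind(Object o) {
            return switch (o) {
                case Integer i -> "int";
                default        -> "something else";
            };
        }

        // In this taxonomy: a total deconstruction (record) pattern makes
        // the switch weakly total, residue = { null }.
        static int xOf(Point p) {
            return switch (p) {
                case Point(int x, int y) -> x;
            };
        }
    }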
We already have a 2x2 matrix of orthogonal choices that we started with:
{ statement, expression } x { arrow, colon }, and we're not going to
pare back that space. So let's look at the other dimension, which
you've currently got as { "single point of totality", optimistic
totality, partiality }.
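(As a reminder, all four of those boxes are already expressible; a quick sketch:)

    class Matrix {
        static void demo(int k) {
            // statement x colon
            switch (k) {
                case 1:  System.out.println("one"); break;
                default: System.out.println("other");
            }

            // statement x arrow
            switch (k) {
                case 1  -> System.out.println("one");
                default -> System.out.println("other");
            }

            // expression x arrow
            String a = switch (k) {
                case 1  -> "one";
                default -> "other";
            };

            // expression x colon (with yield)
            String b = switch (k) {
                case 1:  yield "one";
                default: yield "other";
            };
            System.out.println(a + " " + b);
        }
    }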
Given the lack of clear boundary between the tree-shaped and star-shaped
switches with respect to their residue, I'm not sure the distinction
between SPoT and optimistic totality carries its weight, and we can
collapse these into "X-total vs partial", where X-total means both
totality checking and automatic residue rejection. The interesting part
is validating that the programmer has contributed enough totality that
the compiler can fill in the gaps -- let's highlight that.
To pick a different syntax for purposes of exposition, suppose we had a
`total-switch` variant of `switch`. A `total-switch` would (a) require
that the cases be at least optimistically total and (b) insert
additional throwing cases to patch any unhandled residue. And we could
do this with any of the combinations that lead to X-totality -- a
default clause (strongly total), a weakly total deconstruction pattern
(residue = null), an optimistically total set of type patterns for
sealed types (residue = { null, novel }), and all the more complicated
ones -- distinguishing between these cases doesn't seem interesting. In
this model, `default` just becomes a shorthand for "everything else,
maybe with destructuring."
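To make the exposition concrete (and emphatically hypothetical -- this is a strawman spelling, not proposed syntax, and handleCircle/handleSquare/handleOther are made-up helpers), a `total-switch` might read:

    // Strawman only: `total-switch` is not real syntax.  The compiler would
    // check that { Circle, Square } is optimistically total over the sealed
    // Shape, and synthesize throwing cases for the residue { null, novel }.
    total-switch (shape) {
        case Circle c -> handleCircle(c);
        case Square q -> handleSquare(q);
    }

    // `default` stays relevant as "everything else, maybe with
    // destructuring"; with it, the switch is strongly total.
    total-switch (shape) {
        case Circle c -> handleCircle(c);
        default       -> handleOther(shape);
    }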
Which leaves:
{ statement, expression } x { arrow, colon } x { switch, total-switch }
where an expression switch of any stripe is automatically upgraded to a
total-switch of that type. I think this is still orthogonal (with the
above caveat) since the fallthrough rules apply uniformly, and as
mentioned above we can probably rehabilitate `default` so it can remain
relevant in all the boxes. (There are still some details to work out
here, notably what to do about weakly total deconstruction patterns.)
The asymmetry where expression switches default to total, but statement
switches default to partial is unfortunate, but we knew all along that
was the necessary cost of rehabilitating switch rather than creating a
parallel "snitch" (New Switch) construct. It's just the final payment is
coming due now.
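That asymmetry is already visible with plain enum switches (a sketch; the statement form is allowed to be partial, while the expression form demands totality and gets a synthetic case for the residue):

    class Asymmetry {
        enum Light { RED, YELLOW, GREEN }

        // Statement switch: partial is fine; unmatched values fall out.
        static void warnIfRed(Light l) {
            switch (l) {
                case RED -> System.out.println("stop");
            }
        }

        // Expression switch: totality is demanded (omitting GREEN would be a
        // compile error), and the compiler still patches the residue -- a
        // constant added after compilation -- with a synthetic throw.
        static boolean go(Light l) {
            return switch (l) {
                case GREEN       -> true;
                case RED, YELLOW -> false;
            };
        }
    }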
I would "strongly" prefer to not propagate `default` into patterns, but
leave it as a behavior of switch. I think our refined taxonomy of
totality is now good enough to get us where we need to go without
festooning pattern nesting with nullity knobs. (I think that, if we
have to include totality markers at every stage of the pattern syntax,
we will have made a mistake somewhere else; as I mentioned to Remi when
he brought up "var vs any" (which is just another spelling for default
vs nothing), my objection is not to the syntax but to the amount of
complexity budget it burns for a low-value thing -- raising
ideally-ignorable corner cases into the user's direct field of view,
when instead we can define the natural positive space and let the rest
fall into automatically handled residue. If we have defined the pattern
semantics correctly, I'm not sure anyone will notice.)
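Here's the kind of thing I mean, as a sketch (Box and the Shape hierarchy are made-up names): the nested positions carry no totality markers at all, and whatever falls outside the positive space -- a null Box, a Box holding null, a novel Shape -- lands in the automatically handled residue of the switch.

    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side)   implements Shape {}
    record Box(Shape content) {}

    class Nesting {
        static double area(Box b) {
            return switch (b) {
                // No per-position "default"/"any"/"var" knobs at the nesting
                // sites; the positive space is just these two cases.
                case Box(Circle c) -> Math.PI * c.radius() * c.radius();
                case Box(Square q) -> q.side() * q.side();
            };
        }
    }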
Which (if you buy all the above) leaves us with a bikeshed to paint on
how to spell `total-switch` or `switch-case` or ... ?