Some thoughts and suggestions regarding last month's post on Deconstruction Patterns

David Alayachew davidalayachew at gmail.com
Sat Apr 8 22:47:36 UTC 2023


Hello,

Thank you for the response!


> It’s an overstatement to say “no constraints” but the
> reality is, we can’t constrain away all the things people
> could do wrong. So yes, there is a risk that people will
> not follow the rules and write bad deconstruction
> patterns.
>
> Note that we have this problem already with record
> patterns! Consider
>
>     record Foo(int x) {
>         int x() { return 42; }
>     }
>
> or
>
>     record Bar(int x) {
>         int x() { throw new RuntimeException(); }
>     }
>
> The first merely deconstructs the record wrong; the
> latter “poisons” any pattern match on it.

Apologies, I definitely could have used better words than "no constraints".
And as you said, some of the same vulnerabilities also exist with record
patterns, such as badly written accessors poisoning the patterns.
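
For instance, here is my own quick sketch (not from your message) of how
the first example plays out in practice:

    Object o = new Foo(7);          // the field holds 7
    if (o instanceof Foo(int x)) {
        System.out.println(x);      // prints 42: the record pattern reads the
    }                               // component through the lying accessor
    // Matching a Bar would be worse: its accessor throws, so the pattern
    // match itself fails with an exception.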

That said, the record gives you a deconstructor as a freebie. If I do
nothing but model my state correctly in a record, then I get a
deconstructor that is guaranteed to work. I'm not saying that to ask for a
free deconstructor for plain classes too; I say it to explain why I
hesitate to be as trigger-happy with plain class deconstruction patterns
as I was with record patterns.

In fact, looking back at the past few months, I'm noticing my mindset
starting to shift toward using records and enums by default, reaching for
classes only when necessary. Records have become more and more potent to
me as things move forward. I don't think I appreciated their value when I
first saw them; now I don't like the prospect of going back to a class
when I need to.

Lol, I guess a more accurate way of wording my original post would have
been "I'm scared to be without my freebie safety net, so here are some
ideas that might help me feel a bit better," childish as it may sound.

> We can try to eliminate the most egregious errors (e.g.,
> disallow “throw” statement at the top level)

Oh, that's a really good idea.

And the reason we can do that is that the goal of decomposition is to
break out the parts we want from the object and then do something
meaningful with them. If we don't like what we see in one of the parts, we
should throw the exception AFTER we have broken out the part, run it
against some check, and decided that it is worth throwing an exception
for, not during the decomposition itself.
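
Here is a rough sketch of what I mean, using a record pattern and a
hypothetical Fraction record (the names are mine, just for illustration):

    record Fraction(int numerator, int denominator) { }

    static double toDecimal(Object o) {
        if (o instanceof Fraction(int num, int den)) {
            // The check happens here, after decomposition,
            // not inside an accessor or deconstructor.
            if (den == 0) {
                throw new IllegalArgumentException("zero denominator");
            }
            return (double) num / den;
        }
        throw new IllegalArgumentException("not a Fraction");
    }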

Am I understanding that right?

> but this is obviously only a bandaid since it can easily
> be laundered through a method call. The bottom line is
> that deconstructors and records do have a restricted
> programming model, and it is on users to code them
> correctly.

I see that now. Ultimately, this gift has a price tag that we need to pay,
regardless of what support the language provides for us.

> Note we had a very similar conversation when we did
> streams. Given a stream pipeline:
>
>     ints.stream()
>          .map(x -> { lastX = x; return x + lastX; })
>          ...
>
> If you try to run this in parallel you will not get the
> answer you expected.
>
> Tl;dr: pervasive mutability means we can’t stop people
> from shooting their feet, we can only guide them to a
> better programming model and hope they follow.

I think I get what you are saying. And if so, then it makes good sense to
me and I agree.

The above code is illegal now because you decided not to allow a lambda to
capture a local variable that isn't effectively final. However, with some
indirection (encapsulating lastX in an object and replacing the bare
assignments with getters and setters), I can end up with essentially the
same problem, as sketched below. So you put the check at the first level
(only effectively final locals may be captured) and leave the rest in the
hands of the coders, because it really is their responsibility to write
good code in the first place; you can only help so much.
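
Here is a rough sketch of the kind of indirection I mean (the holder and
method name are mine, just for illustration):

    import java.util.List;

    static int sumWithSharedState(List<Integer> ints) {
        int[] lastX = {0};  // the reference is effectively final...
        return ints.parallelStream()
                   // ...but the contents are shared mutable state,
                   // so a parallel run gives unpredictable results
                   .map(x -> { lastX[0] = x; return x + lastX[0]; })
                   .reduce(0, Integer::sum);
    }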

Am I understanding that right?

> Yes, composition is powerful, but it magnifies the risk
> of poison-in, poison-out.
>
> I get your concern, though I think “exponentially” is a
> bit hyperbolic.

Fair, quantifying it isn't feasible anyway. And regardless, my fears were
mostly put to rest by what you said next.

> > One suggestion would be the ability to create
> > deconstructors that delegate some of their work to
> > other deconstructors.
>
> That’s already in the plan; constructors delegate to
> super-constructors, and deconstructors are the dual of
> constructors.
>
> > Another suggestion would be to create some concept of a
> > "final" for the fields that we are assigning stuff to.
> > Being able to force myself to assign something a value,
> > but also only assign it once, is a powerful construct
> > that already helps me when writing normal code, so I
> > know that it can really help to avoid making some of
> > the same mistakes here and limit the errors in my
> > decomposition.
>
> It depends on the exact expression of the deconstructor
> body (and I don’t want to dive into syntax now), but yes,
> one rational model here is to treat bindings as blank
> finals, and require they be definitely assigned before
> exit.  Then we don’t need to “create some concept of
> final”, because we can use the finality we already have.

Happy to hear that these are viable options.

And thanks for explaining how finality might be done for the bindings of
deconstruction patterns. It's nice when the tools we have can be used in
more places than you expect. It's one of the things I like about Java.
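
If I follow, the existing rule being reused is the blank final rule: a
final field with no initializer must be definitely assigned exactly once
before the constructor exits, for example:

    class Point {
        final int x;     // blank final: no initializer here
        final int y;

        Point(int x, int y) {
            this.x = x;
            this.y = y;  // omitting either assignment, or assigning one
        }                // twice, is a compile-time error
    }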

That said, to consider the needs of other developers, it might be nice if
that finality were something we could opt in to, or, even better, opt out
of.

> > Yet another suggestion would be something a little more
> > controversial (not to mention that this ship may have
> > already sailed) -- I'd like the user of the
> > deconstruction pattern to be able to opt-in to forcing
> > their binding variables to CONTAIN the same identifiers
> > that were used at the deconstructing method's
> > signature.
>
> Unfortunately, this is just unworkable. Suppose you could do
> this on a record:
>
>     record R(@ForcedUseSiteNameMatchDammit int x) { }
>
> Now, if we construct
>
>     record Pair(R r1, R r2) { … }
>
> Then no one can use pattern matching on Pair:
>
>     case Pair(R(int x), R(int x))
>          // duplicate variable names
>
> Now, you said “contain”, so perhaps you meant something like
>
>     case Pair(R(int x1), R(int x2))
>
> But I promise you, no one will thank you for this degree of “do it
> because I think it is good for you.”

All fair points, especially the ones after the "contain". A lot of this
came out of fears that have since been addressed and dealt with. Not to
mention that I now agree with you.

> > Being able to know the field names
>
> Not all patterns will just be unpacking fields.

Oh right, that is true. It's sometimes hard to remember that I am not just
decomposing objects into their components; I am decomposing them into
whatever legal representation of their state makes sense. I could
decompose a BigFraction (composed of a numerator BigInteger and a
denominator BigInteger) into a BigDecimal representing the decimal form of
the fraction, or even into two instances of java.lang.Long for the
numerator and denominator.
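
To make that concrete for myself, here is a rough sketch using only what
exists today (BigFraction and its asDecimal view are hypothetical names,
just to illustrate an alternative representation a deconstructor could
expose):

    import java.math.BigDecimal;
    import java.math.BigInteger;
    import java.math.MathContext;

    record BigFraction(BigInteger numerator, BigInteger denominator) {
        // An alternative representation of the same state: the decimal form.
        BigDecimal asDecimal() {
            return new BigDecimal(numerator)
                    .divide(new BigDecimal(denominator), MathContext.DECIMAL64);
        }
    }

Today a record pattern can only bind the canonical components; a plain
class deconstruction pattern could, in principle, expose a view like
asDecimal() as its bindings instead.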

That also helps me better understand what we are getting with this
open-endedness. There are a lot of flexible ways to decompose objects,
which is why we have to give up some simplicity in order to get that
flexibility.

Thank you for helping me out!
David Alayachew