Primitive type patterns - an alternative approach (JEP 507)

Stephen Colebourne scolebourne at joda.org
Wed Oct 15 23:22:12 UTC 2025


On Wed, 15 Oct 2025 at 16:39, Brian Goetz <brian.goetz at oracle.com> wrote:
> Let's see how I did ... pretty close!  You wanted to go _even more explicit_ than (b) -- by explicitly naming both types

Only one type is named in most cases - the FromType is optional. It
would be perfectly possible to implement the proposal without the
FromType part, but there are use cases where it comes in handy.
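Purely to illustrate the shape - I am inventing the exact tokens here,
reusing the ~ marker discussed later in this mail, so treat this as a
sketch rather than the proposal's actual grammar:

    case ~int i -> ...          // ToType only: "if this converts to int"
    case double ~ int i -> ...  // with FromType: "if this is a double
                                //                 that converts to int"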

> Zooming out, design almost always involves "lump vs split" choices; do we highlight the specific differences between cases, or their commonality?

Another way to express this distinction is "what level of magic is acceptable?"

> For those who didn't go and read JLS 5, here's the set of conversions that are permitted in a casting context:

Although there are 15 different conversions listed, they boil down to
4 basic things (the rest is language-specification noise and some
historic oddities), illustrated below:
- widening, which no one worries about
- boxing/unboxing, which is a convenience
- type checks, which can throw ClassCastException
- type conversion, which can silently lose information
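For concreteness, one line of ordinary Java for each (place these
inside any method):

    long l = 42;             // widening: the int literal widens to long, always safe
    Integer boxed = 42;      // boxing: a convenience
    int unboxed = boxed;     // unboxing: likewise
    Object o = "hello";
    String s = (String) o;   // type check: throws ClassCastException
                             // if o is not actually a String
    int i = (int) 3.99d;     // type conversion: silently lossy, i == 3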

Type checks only ever have one definition - there is never any
ambiguity about whether A is a subtype of B. By contrast, type
conversion is an order of magnitude more complex.

Given `var d = Decimal.of("42.5")`, what should `(int) d` do?
* return 42, because it truncates
* return 42, because it rounds half-down (or floor, or half-even)
* return 43, because it rounds half-up (or ceiling, or up)
* throw, because it is a lossy conversion
* fail to compile

The compile error option is not at all unreasonable - why should the
language pick which of the 8 rounding modes is used? Maybe developers
should be forced to use a method to convert, where the mode can be
specified.
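That is essentially the position BigDecimal takes today - the lossy
paths either demand an explicit mode or throw:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class Rounding {
        public static void main(String[] args) {
            BigDecimal d = new BigDecimal("42.5");
            // caller must pick one of the 8 RoundingMode values
            System.out.println(d.setScale(0, RoundingMode.HALF_UP)); // 43
            System.out.println(d.intValue());      // 42: quietly truncates
            System.out.println(d.intValueExact()); // throws ArithmeticException: lossy
        }
    }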

My proposal was not that extreme, because it does allow a default
answer (throw if lossy), but it argues that the conversion needs to be
called out in the source.

Circling back to "what level of magic is acceptable?": the trouble
here is that partial type patterns and unconditional type patterns
already share the same syntax, and that is bad enough. Adding type
conversions on top goes way too far. This isn't lumping, it is magic.
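A sketch of the problem as I understand the preview semantics (JEP 507
is still in preview, so the exact rules may shift, but the shape is
the point):

    double d = 42.5;
    String s = switch (d) {   // switch on double: JEP 507, --enable-preview
        case int i    -> "int " + i;     // conversion: matches only if d
                                         // converts losslessly to int
                                         // (42.5 does not)
        case float f  -> "float " + f;   // matches: 42.5 is exact as a float
        case double x -> "double " + x;  // unconditional: always matches
    };

Three cases, one syntax, three quite different meanings - and nothing
at the use site tells the reader which is which.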

Trying to read and decipher code that merges type checks and type
conversions in patterns simply isn't possible without an excessive
amount of external context - context that is often unavailable when
reviewing a PR, for example.

All my proposal really argues is that alternative syntaxes are
available that make the code readable again. With ~, the visible
syntax reads as "if I can convert to an int ....". Other options are
available, as sketched below.
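Again, the particular token is not the point - the visible marker is
(the ~ form is my sketch, not settled syntax):

    // JEP 507 as it stands: is this a check or a conversion?
    // The use site doesn't say
    case int i -> ...

    // With a visible marker such as ~ the reader is told up front:
    // "this may convert, and may fail if the value is lossy"
    case ~int i -> ...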


> But, this is nothing new in Java!  This happens with overloading:
>     m(x)

Method overloading is usually done to enable type conversion (ironic,
huh?). And it is rarely confusing, precisely because the overloads all
represent the same operation - they *are* the type conversion. With
method overloading, the feature is about unification - bringing
different types together onto a single code path. With patterns, the
cases perform the opposite role, routing to different code paths. That
is why it is much, much more important to know which branch of a
switch is taken than which method overload is chosen.
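To make that concrete (a sketch, with method names of my own
invention):

    // Overloading unifies: both overloads funnel into the same
    // behaviour, so the caller rarely cares which one is selected
    static void log(int value)    { System.out.println(value); }
    static void log(Object value) { System.out.println(value); }

    // Patterns route: each case is a separate code path, so the
    // reader really does need to know which branch a value takes
    static String route(Object obj) {
        return switch (obj) {
            case Integer i -> "charging account " + i;
            case String s  -> "emailing " + s;
            default        -> "ignoring " + obj;
        };
    }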

Stephen

