Primitive type patterns - an alternative approach (JEP 507)

Gavin Bierman gavin.bierman at oracle.com
Thu Oct 16 17:29:03 UTC 2025


Hi Stephen,

This is a nice document and discussion - thanks.

To understand where JEP 507 is coming from, I think it might be useful to consider a slightly different mental model; let me call it "conversions world".

The fundamental thing we are dealing with is the situation where we want a value of type A, but we have an expression of type B. In conversions world that's simple: we just apply the "B to A conversion" to the expression. Where do we do this? In Java, in lots of places - in argument positions of method calls, in assignment expressions, in casts, in pattern matching, in numerical operations, ... To confuse things, Java has a different set of conversions for each different use - that's a bit odd (C# is much simpler in this respect, for example) - but that's the world we have inherited.

So, just to make it concrete, 

T t = e;

We know that e has static type S. So we figure out a conversion from S to T - let's call it C - and then the compiler bakes it in:

T t = C[e];

(I'm going to use square brackets to mean "I have applied the conversion C to the expression e". Note also that if we couldn't figure out a conversion from S to T, it is a compile-time error!)

In conversions world, we do this ALL THE TIME, EVERYWHERE. For example:

String s = ...;
Object o = s; --> Object o = String-to-Object[s]

(What is the String-to-Object conversion? Operationally it's the identity function! We can obviously optimise...)

But it also works here too:

int i = ...;
...(byte) i... --> ...int-to-byte[i]
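
To see that the int-to-byte conversion really does change the representation, here is a small self-contained sketch (plain standard Java; the class name NarrowingDemo is mine, everything else is just the JLS narrowing rules):

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        int i = 300;           // 300 needs 9 bits: 0b1_0010_1100
        byte b = (byte) i;     // int-to-byte[i]: keep only the low 8 bits
        System.out.println(b); // prints 44 (0b0010_1100)
    }
}
```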

We have lots of these conversion functions in Java, just look at Chapter 5! As you rightly observe, *today* they have the following property:

- The conversions between reference types are essentially functions that are less than the identity, i.e. they either return the object they have been given or they throw.
- The conversions between primitive types are quite different in that many of them actually change the representation; e.g. they take an 8 bit value and return a 32 bit one.
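
Both properties can be observed directly in today's Java. A minimal sketch (run with -ea to enable the assert; the class name is mine):

```java
public class ConversionKinds {
    public static void main(String[] args) {
        // Reference conversion: operationally the identity function.
        String s = "hello";
        Object o = s;               // String-to-Object[s]
        assert o == s;              // same object, no representation change

        // Narrowing reference conversion: identity or throw.
        Object boxed = Integer.valueOf(1);
        try {
            String t = (String) boxed;  // Object-to-String[boxed]
        } catch (ClassCastException e) {
            System.out.println("threw, as expected");
        }

        // Primitive conversion: changes the representation.
        byte small = 100;
        int wide = small;           // byte-to-int: 8 bits become 32 bits
        System.out.println(wide);   // prints 100
    }
}
```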

What you are suggesting, I believe, is to cast this difference in stone *and* make it concrete in syntax.

Unfortunately, I think that is a very serious restriction. We may in the future want to define conversions between reference types that *do* change the representation, e.g. think of a conversion from one value class to another (that is not related by subclassing). We may want to define that conversion using type classes. 

So this future world is a world of generalized conversions. Perhaps users can write their own conversions. Maybe we can even get rid of these magical Foo-to-Bar[-] conversions and write them in a type class somewhere. But it is just a slight generalization of the conversions world we have today. Conversions are, both today and in the future, always defined with respect to static types. When you write (Foo)e, you need to know the static type of e to figure out which conversion to type Foo the compiler will insert. Undoubtedly, Tagir will give us a wonderful IDE experience so you can figure out the conversion :-) If it's one you've written, I'm sure the declaration will be a click away. But the mental model is that it is just a lump of code that converts a value from one type to another. Everything is a conversion.
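
To make "a conversion is just a lump of code" concrete, here is a purely hypothetical sketch in ordinary Java. The Conversion interface and the two constants are mine, invented for illustration; nothing here is part of JEP 507 or any proposed language feature:

```java
import java.util.function.Function;

public class ConversionsSketch {
    // Hypothetical: in "conversions world", every Foo-to-Bar[-] is just
    // a function from one type to another that the compiler inserts.
    interface Conversion<S, T> extends Function<S, T> {}

    static final Conversion<String, Object> STRING_TO_OBJECT = s -> s;  // identity
    static final Conversion<Integer, Byte>  INT_TO_BYTE = i -> (byte) (int) i;

    public static void main(String[] args) {
        System.out.println(STRING_TO_OBJECT.apply("hi"));  // prints hi
        System.out.println(INT_TO_BYTE.apply(300));        // prints 44
    }
}
```

In such a world, user-declared conversions (say, between two unrelated value classes) would be declarations like these, and the compiler's job would still be the same: pick the conversion by static type and bake it in.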

Hope that helps.

Gavin

PS: There is extensive academic work in this area. "Subtyping as coercions" is a formal model where a type A is a subtype of type B if there's an implicit coercion function c from A to B. (That's actually the way the JLS views things, if you squint.) I think I learnt this from a bunch of papers by Zhaohui Luo from the mid-1990s. This approach scales to all sorts of crazy powerful type theories, and provides a powerful framework in which all sorts of important program rewriting scenarios can be recast as type-directed coercion insertion. Recommended reading if you have the time.

> On 15 Oct 2025, at 07:34, Stephen Colebourne <scolebourne at joda.org> wrote:
> 
> In the vein of JEP feedback, I believe it makes sense to support
> primitive types in pattern matching, and will make sense to support
> value types in the future. And I can see the great work that has been
> done so far to enable this.
> 
> Unfortunately, I hate the proposed syntactic approach in JEP 507. It
> wasn't really clear to me as to *why* I hated the syntax until I had
> enough time to really think through what Java does in the area of
> primitive type casts, and why extending that as-is to pattern matching
> would IMO be a huge mistake.
> 
> (Please note that I fully grasp the pedagogical approach wrt
> instanceof defending an unsafe cast, but no matter how much it is
> repeated, I don't buy it, and I don't believe it is good enough by
> itself.)
> 
> To capture my thoughts, I've written up how Java's current approach to
> casts leads me to an alternative proposal - type conversion casts, and
> type conversion patterns:
> https://tinyurl.com/typeconvertjava1
> 
> thanks
> Stephen


