From kfogel at dawsoncollege.qc.ca Wed Oct 1 19:51:10 2025
From: kfogel at dawsoncollege.qc.ca (Kenneth Fogel)
Date: Wed, 1 Oct 2025 19:51:10 +0000
Subject: Packages and Compact Source File
Message-ID:

[May have sent this already but to the wrong mailing list]

I just wish to confirm that a compact source file cannot have a package statement but can have an import statement.

Ken

From forax at univ-mlv.fr Wed Oct 1 20:04:02 2025
From: forax at univ-mlv.fr (Remi Forax)
Date: Wed, 1 Oct 2025 22:04:02 +0200 (CEST)
Subject: Packages and Compact Source File
In-Reply-To:
References:
Message-ID: <1233574522.4789121.1759349042932.JavaMail.zimbra@univ-eiffel.fr>

> From: "Kenneth Fogel"
> To: "amber-dev"
> Sent: Wednesday, October 1, 2025 9:51:10 PM
> Subject: Packages and Compact Source File

> [May have sent this already but to the wrong mailing list]

> I just wish to confirm that a compact source file cannot have a package statement but can have an import statement.

Confirmed! import, import static or import module are all available.

> Ken

regards,
Rémi

From kfogel at dawsoncollege.qc.ca Wed Oct 1 20:05:34 2025
From: kfogel at dawsoncollege.qc.ca (Kenneth Fogel)
Date: Wed, 1 Oct 2025 20:05:34 +0000
Subject: Packages and Compact Source File
In-Reply-To: <1233574522.4789121.1759349042932.JavaMail.zimbra@univ-eiffel.fr>
References: <1233574522.4789121.1759349042932.JavaMail.zimbra@univ-eiffel.fr>
Message-ID:

Thank you.
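Concretely (on JDK 25, where compact source files and module import declarations are both final features), all three import forms are legal in a compact source file, while adding a package statement would be a compile-time error - a minimal sketch:

    // Hello.java - a compact source file: no class declaration, and no
    // package statement allowed; all three kinds of import are fine.
    import java.util.List;             // ordinary import
    import static java.lang.Math.sqrt; // static import
    import module java.base;           // module import

    void main() {
        IO.println(List.of("sqrt(2) =", sqrt(2)));
    }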
From archie.cobbs at gmail.com Fri Oct 10 21:07:23 2025
From: archie.cobbs at gmail.com (Archie Cobbs)
Date: Fri, 10 Oct 2025 16:07:23 -0500
Subject: Ad hoc type restriction
Message-ID:

When I read the draft JEP "Null-Restricted and Nullable Types" a few things really clicked in my head.

The first is that when reviewing Java code, it is very common to need to convince oneself that some variable != null in order to understand the code or prove its correctness. So the addition of a "!" right in front of its declaration will be a huge, everyday win. I look forward to adding this feature to the growing list of other "How can the compiler help me prove this code is correct?" features like: generics, exhaustive switch cases, sealed types, etc.

But null-restriction is just one specific example of a more general concept; let's call it "ad hoc type restriction".

A whole lot of Java code is written using a strategy of "I have some value which can be modeled by type X, but only a subset Y of values of type X are valid for my particular use case, so I'm going to pass these values around as parameters of type X and then validate them everywhere one can enter from the outside world... which, um, requires me to keep track of which values have come from the outside world and which have not, hmm..."

A null-restricted reference type is just the most common example of this - where the subset Y = X \ { null }.

Other examples...

- Using int for: the size of some collection (can't be negative)
- Using byte or char for: an ASCII character
- Using short for: a Unicode basic-plane character (2FE0..2FEF are unassigned)
- Using String for: a SQL query, phone number, SSN, etc. (must be non-null and have the proper syntax)

But regardless of what X or Y is, it's very common for the validation step to be missed or forgotten somewhere, leading to bugs. One might even argue that a *majority* of bugs are due to incomplete validation of some sort or another. Keeping track of when validation is required and manually adding it in all those places is tedious and error prone.

OK, how could the compiler help me? I am starting with some type T, and I want to derive from it some new type R which only permits a subset of T's values. I want the compiler to guarantee this without too much performance penalty. I want R to be arbitrary - however I define it.

Today I can "homebrew" type restriction on reference types by subclassing or wrapping T, for example:

    public class PhoneNumber {
        private String e164; // non-null and in E.164 format, e.g., "+15105551212"
        public PhoneNumber(String s) {
            ... parse, validate E.164 syntax, normalize, ...
        }
        public String toString() {
            return this.e164;
        }
    }

But doing that is not ideal because:

1. This doesn't work for primitive types (e.g., a person's age, grade point average, number of children, etc.)
2. A wrapper class loses all of the functionality that comes with the wrapped class. For example, I can't say pn1.startsWith(pn2) even though both values are essentially just Strings.
3. There is performance overhead

[Side note - in Valhalla at least problem #3 goes away if the wrapper class is a value class?]

So I pose this question to the group: How could the language and compiler take the null-restricted types idea and fully generalize it so that:

- It's easy to define custom type restrictions on both primitive and reference types
- Runtime performance is "as good as casting", i.e., validation only occurs when casting

Here's one idea just for concreteness: Use special type restriction annotations, e.g.:

    @TypeRestriction(baseType = String.class, enforcer = PhoneNumberEnforcer.class)
    public @interface PhoneNumber {
        boolean requireNorthAmerican() default false;
    }

    public class PhoneNumberEnforcer implements TypeRestrictionEnforcer<PhoneNumber, String> {
        @Override
        public void validate(PhoneNumber restriction, String value) {
            if (value == null || !value.matches("\\+[1-9][2-9][0-9]{6,14}"))
                throw new InvalidPhoneNumberException("not in E.164 format");
            if (restriction.requireNorthAmerican() && value.charAt(1) != '1')
                throw new InvalidPhoneNumberException("North American number required");
        }
    }

    // Some random API...
    public void dial(@PhoneNumber String number) { ... }

    // Some random code...
    String input = getUserInput();
    try {
        dialer.dial((@PhoneNumber String)input);
    } catch (InvalidPhoneNumberException e) {
        System.err.println("Invalid phone number: \"" + input + "\"");
    }
    String input2 = getUserInput();
    dialer.dial(input2); // "warning: implicit cast to @PhoneNumber String" ?
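Read as a desugaring, the intent is that each checked cast lowers to a call into the enforcer before the value flows into the restricted type - a sketch, with all helper names hypothetical:

    // Hypothetical lowering of: dialer.dial((@PhoneNumber String) input);
    // the compiler would thread the value through the enforcer first
    // (restriction being the reified @PhoneNumber annotation instance).
    final class TypeRestrictions$Runtime {
        static String checkPhoneNumber(PhoneNumber restriction, String value) {
            new PhoneNumberEnforcer().validate(restriction, value); // throws on bad input
            return value;
        }
    }

    // ...so the call above would compile roughly as:
    // dialer.dial(TypeRestrictions$Runtime.checkPhoneNumber(restriction, input));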
The compiler would insert bytecode or method call-outs at the appropriate points to guarantee that any variable with type @PhoneNumber String would always contain a value that has successfully survived PhoneNumberEnforcer.validate(). In other words, it would provide the same level of guarantee as String! does, but it would be checking my custom type constraint instead.

The Checker Framework does something like the above, but it has limitations due to being a 3rd-party tool, it's trying to solve a more general problem, and personally I haven't seen it in widespread use (is it just me?)

Thoughts?

-Archie

--
Archie L. Cobbs

From archie.cobbs at gmail.com Fri Oct 10 21:33:07 2025
From: archie.cobbs at gmail.com (Archie Cobbs)
Date: Fri, 10 Oct 2025 16:33:07 -0500
Subject: Ad hoc type restriction
In-Reply-To:
References:
Message-ID:

> This doesn't work for primitive types (e.g., a person's age, grade point average, number of children, etc.)

Oops, I meant to say that subclassing doesn't work for primitive types. Obviously, using wrapper classes does.

--
Archie L. Cobbs

From brian.goetz at oracle.com Fri Oct 10 22:27:37 2025
From: brian.goetz at oracle.com (Brian Goetz)
Date: Fri, 10 Oct 2025 18:27:37 -0400
Subject: Ad hoc type restriction
In-Reply-To:
References:
Message-ID: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com>

There is a literature on "restriction types" or "refinement types" that describes what you are appealing to. It is most commonly used in functional languages to describe restrictions on construction that otherwise would not affect the representation, such as "positive integer" as a restriction of Integer or "sorted list" as a restriction of List. (This is a good match for functional languages because (a) if you avoid putting a bad value into a variable in the first place, you don't have to worry about that changing, and (b) there is no extension, so once you restrict the value of a variable the functions that operate on the underlying type just work on the refined type.)

Liquid Haskell (https://en.wikipedia.org/wiki/Liquid_Haskell, https://ucsd-progsys.github.io/liquidhaskell/) is an experimental variant of Haskell that supports refinement types, in aid of verifiable correctness.

Clojure Spec can also be thought of as a refinement type system, allowing you to use linguistic predicates to overlay optional constraints over an otherwise mostly-dynamically-typed system, such as "the value associated with the map key `age` is a non-negative integer."

The bad news is that proving type safety when restrictions can contain arbitrary predicative logic is ... hard. Liquid Haskell, and other similar systems, often must appeal to SMT/SAT solvers to type-check a program (which is to say that we should not be surprised to find NP-complete problems arising as part of type checking.)

Object-oriented languages like Java would likely get to refinement types through wrapping, which has its own set of limitations. If the type you are refining is an interface, you're fine, but if it is a final class (like all value classes!) you obviously have a problem, because you can't interoperate easily between Integer and RefinedInteger (though this problem is likely solvable for values.)

> But doing that is not ideal because:
>
> 1. This doesn't work for primitive types (e.g., a person's age, grade point average, number of children, etc.)
> 2. A wrapper class loses all of the functionality that comes with the wrapped class. For example, I can't say pn1.startsWith(pn2) even though both values are essentially just Strings.
> 3. There is performance overhead

(2) is even worse than it sounds: I can't use a WrappedString where a String is expected, even if I manually lifted all the String methods onto it.

> Thoughts?

I think the best bet for making this usable would be some mechanism like a "view", likely only on value types, that would erase down to the underlying wrapped type, but interpose yourself on construction, and provide a conversion from T to RefinedT that verified the requirement. But this is both nontrivial and presumes a lot of stuff we don't even have yet....
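Until then, the nearest expressible shape is probably the wrapper Archie already showed, upgraded to a value class - a hedged sketch using the `value class` syntax proposed by Valhalla's JEP 401 (not yet in any released JDK), which addresses the overhead point but not the lost-API point:

    // Sketch only - assumes Valhalla's proposed value classes (JEP 401).
    // Identity-free, so the JVM may flatten it; but pn1.startsWith(pn2)
    // still does not compile, because String's API is not inherited.
    value class PhoneNumber {
        private final String e164;

        PhoneNumber(String s) {
            if (s == null || !s.matches("\\+[1-9][0-9]{6,14}"))
                throw new IllegalArgumentException("not in E.164 format: " + s);
            this.e164 = s;
        }

        public String toString() { return e164; }
    }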
From ethan at mccue.dev Sat Oct 11 00:29:12 2025
From: ethan at mccue.dev (Ethan McCue)
Date: Fri, 10 Oct 2025 20:29:12 -0400
Subject: Ad hoc type restriction
In-Reply-To:
References:
Message-ID:

As with all feature ideas, it's worth first asking if it could be implemented by ecosystem tooling instead of being baked into Java.

For nullable types the answer was yes, until the VM started to need to care about nullability for flatness of memory. The only tweak is that when you wrote

    void daft(@NonNull String punk) {
        IO.println(punk.length());
    }

you were not given enforcement by the VM of this invariant. Even though this method was written in a way that assumes it, reflection or other mechanisms that break out of the static checking world could easily pass a null value.

However, there is nothing conceptually preventing the tools validating @NonNull usage from also emitting an error until you have inserted a known precheck.

    void daft(@NonNull String punk) {
        // You could get an error until you add this
        Objects.requireNonNull(punk);
        IO.println(punk.length());
    }

This is not generally done for nullability because most everything tends to be non-null on analysis, and that's a *lot* of Objects.requireNonNull checks.

But for other single-value invariants, like your @PhoneNumber example, it seems fairly practical. Especially since, as a general rule, arbitrary-cost computations really shouldn't be invisible. How would one know if (@B A) is going to thread invocations of some validation method everywhere?

    void call(@PhoneNumber String me) {
        PhoneNumbers.validate(me);
        dial(me);
    }

> How could the language and compiler take the null-restricted types idea and fully generalize it

This requires a hitherto forbidden incantation to summon the dependent types demon. Upon summoning one must then forge a contract. This contract, if witnessed and notarized by the Sisters of Justice, would be binding. Unfortunately we cannot know the terms of the contract up front, but one must assume it would take a large scale sacrifice. Petitioning nation states might be a good first step, if only to have ready blood enough to spill.

From ethan at mccue.dev Sat Oct 11 00:35:47 2025
From: ethan at mccue.dev (Ethan McCue)
Date: Fri, 10 Oct 2025 20:35:47 -0400
Subject: Ad hoc type restriction
In-Reply-To:
References:
Message-ID:

Sorry, it's been a long day. I meant to say Devil. Demons are of course chaotic evil and would not abide by any contract. Devils are lawful evil.
From atonita at proton.me Sat Oct 11 05:28:44 2025
From: atonita at proton.me (Aaryn Tonita)
Date: Sat, 11 Oct 2025 05:28:44 +0000
Subject: Ad hoc type restriction
In-Reply-To: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com>
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com>
Message-ID:

> I think the best bet for making this usable would be some mechanism like a "view", likely only on value types, that would erase down to the underlying wrapped type, but interpose yourself on construction, and provide a conversion from T to RefinedT that verified the requirement. But this is both nontrivial and presumes a lot of stuff we don't even have yet....

Bringing the newtype pattern to Java would be quite handy. Besides these cases of standing in for refinement types, there are also all the cases of favoring composition over inheritance, where a mechanism to automatically "dereference" to a delegate would avoid a bunch of boilerplate.
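The boilerplate in question is the hand-written forwarding that composition costs today - a minimal sketch (plain current Java, illustrative names) of what an automatic "dereference" to the delegate would eliminate:

    import java.util.Objects;

    // A newtype-style wrapper today: every delegated operation the call
    // sites need must be forwarded to the underlying value by hand.
    record PhoneNumber(String value) {
        PhoneNumber {
            Objects.requireNonNull(value); // plus format validation as desired
        }

        // Hand-written forwarding; auto-deref would supply these for free.
        boolean startsWith(String prefix) { return value.startsWith(prefix); }
        int length() { return value.length(); }
        // ...and so on, one stub per String method used.
    }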
From redio.development at gmail.com Sat Oct 11 16:17:31 2025
From: redio.development at gmail.com (Red IO)
Date: Sat, 11 Oct 2025 18:17:31 +0200
Subject: Ad hoc type restriction
In-Reply-To:
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com>
Message-ID:

This sounds pretty similar in concept to Rust smart pointers - more precisely, the Deref operator. It allows the wrapper object to be treated like a reference to the actual value, and since "methods" in Rust are called on references to objects, the wrapper gets the ability to have all methods called on it, treated like a temporary reference to the actual contained value. This works great because of Rust's explicit references, values and the lifetime system.

Applying this concept to Java is possible, but it isn't as syntactically clear as in Rust. It requires the notion of a temporary reference to make sense - something that doesn't exist in Java syntactically. Without this concept it's just an implicit cast; effectively an implicit getter.

Another problem is that, unlike in Rust, mutability isn't a property an instance and all its fields share, but a property each field decides separately. Therefore a mutable type cannot be forced to act immutable when viewed through the restricting wrapper type's getter. Java expresses no ownership relationship between an object and its fields.

For this concept to work and preserve the integrity of the wrapper's restrictions, it requires either colored methods (mutable, immutable) or defensive copies on access. Neither is a concept with a universal implementation in Java. As introducing colored functions is out of the question, only defensive copies are possible. This requires the concept of a copy mechanism the compiler can use when accessing the actual value through the restricting wrapper. This could be CoW (copy on write) or a normal field copy. Introducing a formal way to make a copy of a class instance is something that is beneficial anyway and overlaps with some Valhalla concepts.

Hope these thoughts help in the understanding of the problems and possible solutions.

Great regards
RedIODev
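A sketch of the defensive-copy option in today's Java (illustrative names; a List is used because String is already immutable):

    import java.util.List;

    // A restricting wrapper over a mutable type must not hand out a
    // mutable reference, or callers could break the invariant that was
    // validated at construction.
    final class NonEmptyBatch {
        private final List<String> items;

        NonEmptyBatch(List<String> items) {
            if (items.isEmpty()) throw new IllegalArgumentException("empty batch");
            this.items = List.copyOf(items); // defensive copy on the way in
        }

        // List.copyOf returns an unmodifiable list, so exposing it
        // directly cannot violate the non-empty restriction.
        List<String> items() { return items; }
    }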
From archie.cobbs at gmail.com Mon Oct 13 19:17:54 2025
From: archie.cobbs at gmail.com (Archie Cobbs)
Date: Mon, 13 Oct 2025 14:17:54 -0500
Subject: Ad hoc type restriction
In-Reply-To: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com>
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com>
Message-ID:

Ethan McCue wrote:

> However there is nothing conceptually preventing the tools validating @NonNull usage from also emitting an error until you have inserted a known precheck.
> ...
> But for other single-value invariants, like your @PhoneNumber example, it seems fairly practical.
> Especially since, as a general rule, arbitrary cost computations really shouldn't be invisible. How would one know if (@B A) is going to thread invocations of some validation method everywhere?

This is why 3rd-party tools aren't as good as having the compiler handle it: the compiler is in a position to provide both stronger and more efficient guarantees - think generic types and runtime erasure. Compiler-supported typing allows the developer to move the burden of proof from the method receiving a parameter to the code invoking that method, and onward back up the call chain, so that validations tend to occur "early", when they are first known to be true, instead of "late" at the (many more) points in the code where someone actually cares that they are true.

So if phone numbers are central to your application, and they are passed around and used all over the place as type @PhoneNumber String, then they will only need to actually be validated at a few application entry points, not at the start of every method that has a phone number as a parameter. In other words, the annotation is ideally not a "to-do" list but rather an "it's already done" list.

The guarantee that the compiler would then provide is ideally on the same level as with generics: since it's being provided by the compiler, not the JVM, you can always get around it if you try hard enough (native code, reflection, class file switcheroo, etc.), but as long as you "follow the rules" you get the guarantee - or if not, an error or at least a warning.

Brian Goetz wrote:

> I think the best bet for making this usable would be some mechanism like a "view", likely only on value types, that would erase down to the underlying wrapped type, but interpose yourself on construction, and provide a conversion from T to RefinedT that verified the requirement. But this is both nontrivial and presumes a lot of stuff we don't even have yet...

I think that is close to what I was imagining. It seems like it could be done with fairly minimal impact/disruption...? No need for wrappers or views.

But first, just to be clear, what I'm getting at here is a fairly narrow idea, i.e., what relatively simple thing might the compiler do, with a worthwhile cost/benefit ratio, to make it easier for developers to reason about the correctness of their code when "type restriction" is being used, either formally or informally (meaning, if you're using an int to pass around the size of a collection, you're doing informal type restriction).

What's the benefit? Type restriction is fairly pervasive, and yet because Java doesn't make it very easy to do, it's often not being done at all, and this ends up adding to the amount of manual work developers must do to prove to themselves their code is correct. The more of this burden the compiler could take on, the bigger the benefit would be.

What's the cost? That depends on the solution, of course.

To me the giant poster child for this kind of pragmatic language addition is generics. It had all kinds of minor flaws from the point of view of language design, but the problem it addressed was so pervasive, and the new tool it provided to developers for verifying the correctness of their code was so powerful, that nobody thinks it wasn't worth the trade-off.

OK, let me throw out two straw-man proposals. I'll just assume these are stupid/naive ideas with major flaws. Hopefully they can at least help map out the usable territory - if any exists.

*Proposal #1*

This one is very simple, but provides a weaker guarantee.
1. The compiler recognizes and tracks "type restriction annotations", which are type annotations having the meta-annotation @TypeRestriction
2. For all operations assigning some value v of type S to type T:
   1. If a type restriction annotation A is present on T but not S, the compiler generates a warning in the new lint category "type-restriction"

That's it. A cast like var pn = (@PhoneNumber String)input functions simply as a developer assertion that the type restriction has been verified, but the compiler does not actually check this. There is no change to the generated bytecode. If the developer chooses to write a validation method that takes a string, validates it (or throws an exception), and then returns the validated string, that method will need to be annotated with @SuppressWarnings("type-restriction") because of the cast in front of the return statement.

Guarantee provided: Proper type restriction as long as "type-restriction" warnings are enabled and not emitted. However, this is a "fail slow" guarantee: it's easy to defeat (just cast!). So if you write a method that takes a @PhoneNumber String parameter that is passed an invalid value, you won't find out until something goes wrong later down the line (or never). In other words, *your* code will be correct, but you have to be trusting of any code that *invokes* your code, which in practice is not always a sound strategy.

*Proposal #2*

This proposal is more complex but provides a stronger guarantee:

1. The compiler recognizes and tracks "type restriction annotations", which have the meta-annotation @TypeRestriction
   1. The annotation specifies a user-supplied "constructor" class providing a user-defined construction/validation method validate(v)
   2. We add class TypeRestrictionException extends RuntimeException and encourage validate() methods to throw (some subclass of) it; the supporting declarations are sketched below
2. For all operations assigning some value v of type S to type T:
   1. If a type restriction annotation A is present on T but not S, the compiler generates a "type-restriction" warning AND adds an implicit cast (see next step)
3. For every cast like var pn = (@PhoneNumber String)"+15105551212" the compiler inserts bytecode to invoke the appropriate enforcer validate(v) method
4. The JLS rules for method resolution, type inference, etc., do not change (that would be way over-complicating things)
   1. Two methods void dial(String pn) and void dial(@PhoneNumber String pn) will still collide

Guarantee provided: Proper type restriction unless you are going to extremes (native code, reflection, runtime classfile switcheroo, etc.). This is a "fail fast" guarantee: errors are caught at the moment an invalid value is assigned to a type-restricted variable. If your method parameters have the annotation, you don't have to trust 3rd-party code that calls those methods (as long as it was compiled properly). I.e., same level of guarantee as generics.
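For concreteness, the supporting declarations Proposal #2 presupposes might look like this - hypothetical, extrapolating the names already used above:

    import java.lang.annotation.*;

    // Meta-annotation marking a type annotation as a restriction,
    // and naming the enforcer that validates values at each cast.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.ANNOTATION_TYPE)
    @interface TypeRestriction {
        Class<?> baseType();
        Class<? extends TypeRestrictionEnforcer<?, ?>> enforcer();
    }

    // Contract the compiler-inserted check calls into; A is the
    // restriction annotation, T the restricted base type.
    interface TypeRestrictionEnforcer<A extends Annotation, T> {
        void validate(A restriction, T value); // throws TypeRestrictionException
    }

    class TypeRestrictionException extends RuntimeException {
        TypeRestrictionException(String message) { super(message); }
    }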
These are by no means complete or particularly elegant solutions from a language design point of view. They are pragmatic and relatively unobtrusive add-ons, using existing language concepts, to get us most of what we want, which is:

- User-defined "custom" type restrictions with compile-time checking/enforcement
  - As with generics, the goal is not language perfection, but rather making it easier for developers to reason about correctness
- Compile-time guarantees that type-restricted values in source files will actually be type restricted at runtime
- Efficient implementation
  - Validation only happens "when necessary"
  - No JVM changes needed (erasure)
- No changes to language syntax; existing source files are 100% backward compatible

The developer side of me says that the cost/benefit ratio of something like this would be worthwhile, in spite of its pragmatic nature, simply because the problem being addressed seems so pervasive. I felt the same way about generics (which was a much bigger change addressing a much bigger pervasive problem).

But I'm sure there are things I'm missing... ?

-Archie

--
Archie L. Cobbs

From chris at upliftinglemma.net Mon Oct 13 21:56:54 2025
From: chris at upliftinglemma.net (Chris Bouchard)
Date: Mon, 13 Oct 2025 17:56:54 -0400
Subject: Ad hoc type restriction
In-Reply-To:
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com>
Message-ID:

Archie,

I think first-party type restrictions would be great! My first thought was that these annotation-based type restrictions we're discussing feel very similar to constraint annotations in Jakarta Validation, a.k.a. Bean Validation or Hibernate Validator. (I don't think I've seen that mentioned yet in the thread.) I don't think Jakarta Validation fits the use case we're exploring here, but I'm sure there are some lessons to be learned from its design.

I have a couple thoughts regarding Proposal #2.

On Mon, Oct 13, 2025 at 3:19 PM Archie Cobbs wrote:
>
> Proposal #2
>
> This proposal is more complex but provides a stronger guarantee:
>
> ...
>
> 2. For all operations assigning some value v of type S to type T:
>    1. If a type restriction annotation A is present on T but not S, the compiler generates a "type-restriction" warning AND adds an implicit cast (see next step)

I think it's worth calling out that restricted types could be more complex than @PhoneNumber String. For example, we could (presumably) have a restricted type like List<@Directory Map<@Name String, @PhoneNumber String>>, where different annotations are attached to different "nodes" of the generic type.

Further, it feels natural that we'd want to use type restriction annotations in class definitions like

    public class NameMap<V> implements Map<@Name String, V> { ... }

so that the type restriction is present in the inherited interface methods.* In that case, we'd need NameMap<@PhoneNumber String> to match Map<@Name String, @PhoneNumber String>. None of this is difficult or new, but it is more complex than just checking top-level type annotations.

(* I originally followed this with, "Or else users would have to override every inherited method to add the annotation manually." But that would actually run afoul of variance, right? Assuming we think of @PhoneNumber String as a subtype of String.)

> 3. For every cast like var pn = (@PhoneNumber String)"+15105551212" the compiler inserts bytecode to invoke the appropriate enforcer validate(v) method
I like that this proposal forces a validation in order to apply the restriction, but I don't think I'm a fan of hanging it off of casting - that feels too magical to me. I get that we want a certain level of magic, because we want this to be painless for the end user, but I think it's a reasonable assumption right now that casting is "almost free." It's sort of the same complaint as with operator overloading. Having casts run library code feels to me like a footgun waiting to happen.

Instead, I'd suggest that validators could just be library methods with their own annotation. E.g.,

    public class Validators {
        @TypeRestrictionValidator(@PhoneNumber)
        public static String phoneNumber(String value) {
            if (!isValidPhoneNumber(value)) {
                throw new TypeRestrictionException("Not a phone number!");
            }
            return value;
        }
    }

The compiler could notice the @TypeRestrictionValidator annotation and check that phoneNumber is a valid "type restriction validator method" matching a functional interface like

    public interface TypeRestrictionValidator<T> {
        T validate(T value);
    }

When calling a type restriction validator method, the compiler can automatically apply the appropriate cast to the result *without* triggering a warning - every other conversion to a "more restricted" type would warn. (In a sense, @TypeRestrictionValidator(@PhoneNumber) augments the return type from String to @PhoneNumber String.)

I suggest that validator methods return T instead of @Restriction T - which would essentially be your Proposal #1 - so that the validator method doesn't have to suppress warnings in its own body, and so can still benefit from checking for correct use of *other* restriction types. This might promote a certain amount of composability. E.g., the @PhoneNumber validator method could be

    public static @NotBlank String phoneNumber(@NotBlank String value) { ... }

and it could internally call other methods that depend on that @NotBlank restriction, secure in the knowledge that the compiler is checking those calls. It's also important that the validator's return type be "as restricted" as its input type, so that the compiler can add the new restriction without invalidating the existing ones.

One small note: Specifying this with a functional interface would also allow "fluent validation methods" via instance methods, if a library author prefers that to static methods.

My suggestion is inspired by the TypeIs[T] type annotation in Python. (And I'm sure there are similar concepts in other languages, but I can't think of them offhand.) Note, though, that Python TypeIs functions return a boolean indicating whether the value is a member of the restricted type or not, which is used to narrow the type in the calling scope. It doesn't seem as natural for Java to narrow based on a conditional, so I suggested returning a value instead.

I think it would also be reasonable for the contract to be

    public interface TypeRestrictionValidator<T> {
        void validate(T value); // Return void instead of T.
    }

so that the validator method can't replace the value. I do think there's value in allowing the validator to replace the value - e.g., the validator for @PhoneNumber could return a string in a normalized format - but it's not *necessary*. But I also think that would be more frustrating to use, because you'd have to call the validator outside of any method chain or expression.
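Putting those pieces together, a call site under the value-returning variant might read like this (a sketch reusing Chris's Validators and Archie's earlier dial API):

    // The compiler treats the validator's result as @PhoneNumber String,
    // so it can flow straight into a restricted parameter in a chain;
    // passing the raw value would still draw a warning.
    String input = getUserInput();
    dialer.dial(Validators.phoneNumber(input)); // OK: result carries @PhoneNumber
    dialer.dial(input);                         // warning: unvalidated conversion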
Thanks,
Chris Bouchard

From ethan at mccue.dev Tue Oct 14 00:30:09 2025
From: ethan at mccue.dev (Ethan McCue)
Date: Mon, 13 Oct 2025 20:30:09 -0400
Subject: Ad hoc type restriction
In-Reply-To:
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com>
Message-ID:

Let me frame this a different way.

The compiler already provides an extension mechanism in the form of annotation processors. We already have examples out there in the world of annotation processors that augment the compiler with extra checks, including those that treat an annotated type as a distinct type and perform flow analysis (https://checkerframework.org/manual/#writing-annotations). The primary restriction on this mechanism is that a processor cannot alter whether a given compilation unit is valid Java or how that compilation unit will translate to bytecode.

The bar for a new compiler extension mechanism would reasonably begin at "the existing mechanism is insufficient." As I see it, your Proposal #1 is already viable. Proposal #2 is not, without lifting (in part or in whole) the primary restriction on annotation processors; but if those checks are not synthetic - i.e. a warning is provided until an explicit check is inserted into the source code - that sidesteps the need for it. You could also perform bytecode rewriting after program compilation to insert checks. That is allowed.

> because Java doesn't make it very easy to do, it's often not being done at all

Maybe let's interrogate this from the other side: Why is this not easy to do? What friction points are essential (you do need to get a processor and put it on paths - what else?) and which are artificial? Clearly it's harder than a "normal library" - why is that?
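For reference, the skeleton such a processor starts from - a minimal sketch against the standard javax.annotation.processing API (the com.example.PhoneNumber annotation is a placeholder, and a real checker would do flow analysis rather than flag every use):

    import java.util.Set;
    import javax.annotation.processing.*;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;

    @SupportedAnnotationTypes("com.example.PhoneNumber")
    @SupportedSourceVersion(SourceVersion.RELEASE_25)
    public class PhoneNumberProcessor extends AbstractProcessor {
        @Override
        public boolean process(Set<? extends TypeElement> annotations,
                               RoundEnvironment roundEnv) {
            for (TypeElement annotation : annotations) {
                for (Element use : roundEnv.getElementsAnnotatedWith(annotation)) {
                    // A real checker would verify a validation call dominates
                    // each use; here we just point at every annotated element.
                    processingEnv.getMessager().printMessage(
                            Diagnostic.Kind.WARNING,
                            "@PhoneNumber value is not provably validated here",
                            use);
                }
            }
            return false; // don't claim the annotation; other processors may run
        }
    }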
> > The guarantee that the compiler would then provide is ideally on the same > level as with generics: while it's being provided by the compiler, not the > JVM, so you can always get around it if you try hard enough (native code, > reflection, class file switcheroo, etc.), as long as you "follow the rules" > you get the guarantee - or if not, an error or at least a warning. > > Brian Goetz wrote: > >> I think the best bet for making this usable would be some mechanism like >> a "view", likely only on value types, that would erase down to the >> underlying wrapped type, but interpose yourself on construction, and >> provided a conversion from T to RefinedT that verified the requirement. >> But this is both nontrivial and presumes a lot of stuff we don't even have >> yet... >> > > I think that is close to what I was imagining. It seems like it could be > done with fairly minimal impact/disruption...? No need for wrappers or > views. > > But first just to be clear, what I'm getting at here is a fairly narrow > idea, i.e., what relatively simple thing might the compiler do, with a > worthwhile cost/benefit ratio, to make it easier for developers to reason > about the correctness of their code when "type restriction" is being used, > either formally or informally (meaning, if you're using an int to pass > around the size of collection, you're doing informal type restriction). > > What's the benefit? Type restriction is fairly pervasive, and yet because > Java doesn't make it very easy to do, it's often not being done at all, and > this ends up adding to the amount of manual work developers must do to > prove to themselves their code is correct. The more of this burden the > compiler could take on, the bigger the benefit would be. > > What's the cost? That depends on the solution of course. > > To me the giant poster-child for this kind of pragmatic language addition > is generics. It had all kinds of minor flaws from the point of view of > language design, but the problem it addressed was so pervasive, and the new > tool it provided to developers for verifying the correctness of their code > was so powerful, that nobody thinks it wasn't worth the trade-off. > > OK let me throw out two straw-man proposals. I'll just assume these are > stupid/naive ideas with major flaws. Hopefully they can at least help map > out the usable territory - if any exists. > > *Proposal #1* > > This one is very simple, but provides a weaker guarantee. > > 1. The compiler recognizes and tracks "type restriction annotations", > which are type annotations having the meta-annotation @TypeRestriction > 2. For all operations assigning some value v of type S to type T: > 1. If a type restriction annotation A is present on T but not S, > the compiler generates a warning in the new lint category > "type-restriction" > > That's it. A cast like var pn = (@PhoneNumber String)input functions > simply as a developer assertion that the type restriction has been > verified, but the compiler does not actually check this. There is no change > to the generated bytecode. If the developer chooses to write a validation > method that takes a string, validates it (or throws an exception), and then > returns the validated string, that method will need to be annotated with > @SuppressWarnings("type-restriction") because of the cast in front of the > return statement. > > Guarantee provided: Proper type restriction as long as "type-restriction" > warnings are enabled and not emitted. 
> However, this is a "fail slow" guarantee: it's easy to defeat (just cast!).
> So if you write a method that takes a @PhoneNumber String parameter that is
> passed an invalid value, you won't find out until something goes wrong
> later down the line (or never). In other words, *your* code will be
> correct, but you have to be trusting of any code that *invokes* your code,
> which in practice is not always a sound strategy.
>
> *Proposal #2*
>
> This proposal is more complex but provides a stronger guarantee:
>
> 1. The compiler recognizes and tracks "type restriction annotations",
>    which have the meta-annotation @TypeRestriction
>    1. The annotation specifies a user-supplied "constructor" class
>       providing a user-defined construction/validation method validate(v)
>    2. We add class TypeRestrictionException extends RuntimeException and
>       encourage validate() methods to throw (some subclass of) it
> 2. For all operations assigning some value v of type S to type T:
>    1. If a type restriction annotation A is present on T but not S,
>       the compiler generates a "type-restriction" warning AND adds an
>       implicit cast (see next step)
> 3. For every cast like var pn = (@PhoneNumber String)"+15105551212" the
>    compiler inserts bytecode to invoke the appropriate enforcer
>    validate(v) method
> 4. The JLS rules for method resolution, type inference, etc., do not
>    change (that would be way over-complicating things)
>    1. Two methods void dial(String pn) and void dial(@PhoneNumber
>       String pn) will still collide
>
> Guarantee provided: Proper type restriction unless you are going to
> extremes (native code, reflection, runtime classfile switcheroo, etc.).
> This is a "fail fast" guarantee: errors are caught at the moment an invalid
> value is assigned to a type-restricted variable. If your method parameters
> have the annotation, you don't have to trust 3rd party code that calls
> those methods (as long as it was compiled properly). I.e., the same level
> of guarantee as generics.
>
> These are by no means complete or particularly elegant solutions from a
> language design point of view. They are pragmatic and relatively
> unobtrusive add-ons, using existing language concepts, to get us most of
> what we want, which is:
>
> - User-defined "custom" type restrictions with compile-time
>   checking/enforcement
>   - As with generics, the goal is not language perfection, but rather
>     making it easier for developers to reason about correctness
> - Compile-time guarantees that type-restricted values in source files
>   will actually be type restricted at runtime
> - Efficient implementation
>   - Validation only happens "when necessary"
>   - No JVM changes needed (erasure)
> - No changes to language syntax; existing source files are 100%
>   backward compatible
>
> The developer side of me says that the cost/benefit ratio of something
> like this would be worthwhile, in spite of its pragmatic nature, simply
> because the problem being addressed seems so pervasive. I felt the same way
> about generics (which was a much bigger change addressing a much bigger,
> pervasive problem).
>
> But I'm sure there are things I'm missing...
>
> -Archie
>
> --
> Archie L. Cobbs

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
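Ethan's claim above that proposal #1 is already viable rests on this existing
hook: an annotation processor can inspect annotated elements and emit
diagnostics without changing what the program means. A minimal sketch,
assuming a hypothetical type annotation example.PhoneNumber (a real checker
would additionally need flow analysis over the annotated trees):

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

// Registers interest in the (hypothetical) example.PhoneNumber annotation.
@SupportedAnnotationTypes("example.PhoneNumber")
@SupportedSourceVersion(SourceVersion.RELEASE_21)
public class PhoneNumberProcessor extends AbstractProcessor {
    @Override
    public boolean process(Set<? extends TypeElement> annotations,
                           RoundEnvironment roundEnv) {
        for (TypeElement anno : annotations) {
            for (Element e : roundEnv.getElementsAnnotatedWith(anno)) {
                // A real checker would analyze assignments flowing into
                // this element; the sketch only shows the diagnostic hook.
                processingEnv.getMessager().printMessage(
                        Diagnostic.Kind.WARNING,
                        "unvalidated use of @PhoneNumber", e);
            }
        }
        return false; // don't claim the annotation; let other processors see it
    }
}

Note the processor can only warn or reject; it cannot make the cast validate
anything at runtime, which is the line proposal #2 would cross.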
From brian.goetz at oracle.com Tue Oct 14 00:49:50 2025
From: brian.goetz at oracle.com (Brian Goetz)
Date: Mon, 13 Oct 2025 20:49:50 -0400
Subject: Ad hoc type restriction
In-Reply-To:
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com>
Message-ID: <88e65ed4-d891-4e64-a4fb-cb4b35ee3d6c@oracle.com>

I just want to point out that there's a sort of "syntax error" in your
proposal.

Java provides annotations as a means of "structured comments" on declarations
and type uses, but the Java language does not, and will not, impart any
semantics to programs on the basis of annotations. If you are talking about
writing a static analysis tool, perhaps a pluggable checker in the Checkers
framework, then (as Ethan points out) you can use existing annotations with
APs and do so, and the compiler is merely a conduit for ferrying the
annotations to where an AP can find them. (In fact, there is already a checker
for "fake enums", where you say that a given `int` is really one of the
enumerated set 1, 2, 3, 4, which is a restriction type.)

If you mean that the compiler actually is going to get into the act, though,
then this is not an annotation-driven feature, this is a full-blown language
feature and should be thought of accordingly. I know it's tempting to view
annos as a "shortcut" to language features, but if something has semantics,
it's part of the language, and sadly that means no shortcuts. That's not to
say it isn't a worthwhile idea with a good cost-to-benefit ratio. (Indeed, as
we get further into the type classes work, the logic of a `newtype` mechanism
becomes even more compelling, as then it becomes possible to affect behavior
with restrictions such as `CaseInsensitiveString`, which doesn't actually
restrict the value set of the type, but allows you to define
`Ord CaseInsensitiveString` separate from `Ord String`.)

On 10/13/2025 3:17 PM, Archie Cobbs wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From sarma.swaranga at gmail.com Tue Oct 14 01:17:05 2025
From: sarma.swaranga at gmail.com (Swaranga Sarma)
Date: Mon, 13 Oct 2025 18:17:05 -0700
Subject: Ad hoc type restriction
In-Reply-To: <88e65ed4-d891-4e64-a4fb-cb4b35ee3d6c@oracle.com>
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com> <88e65ed4-d891-4e64-a4fb-cb4b35ee3d6c@oracle.com>
Message-ID:

Yes, this would need to be a proper language feature. Some time back, in a
Reddit wishlist for Java features, I posted this as a "type aliases with
validation" feature: a new language construct to declare a new type with an
accompanying validation lambda:

// definition site
type-alias CustomerId::String where {
    if (!this.matches("CUST-[0-9]{4}")) {
        throw new IllegalArgumentException("CustomerId must match pattern CUST-XXXX where X is a digit.");
    }
}

// use site
void handleCustomer(CustomerId id) {
    // id guaranteed to match pattern CUST-XXXX
    // can call all String methods
}

CustomerId id = "CUST-1234"; // valid; validation executed at assignment time
String invalidId = "INVALID";
String validId = "CUST-1234";

handleCustomer(id);        // validation executed at runtime exactly once
handleCustomer(validId);   // validation executed at runtime exactly once
handleCustomer(invalidId); // throws IllegalArgumentException at runtime

I only wrote it to demonstrate what I was talking about; it wasn't a
suggestion on what it should look like, or on how the validation that happens
at different sites would be surfaced to the user. So please disregard the
syntax. What I really wanted to say was that this would be a game-changer for
so many of my projects. Many times, I am tempted to take the shortcut of not
creating the wrapper domain types, both to avoid the allocation and to avoid
having to lift some of the methods from the wrapped type to the wrapper.
Valhalla may address the allocation concern, but something like this would be
amazing.

I will go read about type classes, as I know nothing about them.

Regards
Swaranga

On Mon, Oct 13, 2025 at 5:50 PM Brian Goetz wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
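The closest approximation possible today is a record wrapper whose compact
constructor runs the check - which also illustrates exactly the friction
Swaranga describes: it allocates, and the wrapped type's methods must be
lifted by hand. A sketch, with illustrative names:

public record CustomerId(String value) {
    public CustomerId {
        // The validation runs once, at construction; afterwards any
        // CustomerId in hand is known to satisfy the invariant.
        if (value == null || !value.matches("CUST-[0-9]{4}")) {
            throw new IllegalArgumentException(
                    "CustomerId must match pattern CUST-XXXX where X is a digit.");
        }
    }

    // Methods of the wrapped type must be re-exposed manually.
    public boolean startsWith(String prefix) {
        return value.startsWith(prefix);
    }
}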
From david at livemedia.com.au Tue Oct 14 02:03:50 2025
From: david at livemedia.com.au (David Ryan)
Date: Tue, 14 Oct 2025 13:03:50 +1100
Subject: Ad hoc type restriction
In-Reply-To:
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com> <88e65ed4-d891-4e64-a4fb-cb4b35ee3d6c@oracle.com>
Message-ID:

Just to add another angle to this discussion: this also has application in
data schemas (e.g. JSON Schema, XML Schema), where you want to validate input
to conform to range restrictions. I have been investigating what syntax might
look like in the context of schemas, looking at both JSON Schema and XML
Schema to see how they handle value constraints. Schemas parameterise
constraints rather than allowing functions. One idea I came up with for
syntax [1] was the following; more just as an investigation than what I'd
expect in a final schema design:

person: !object ? { additionalProperties: true } {
    name: !string ? { minLength: 4, maxLength: 30 },
    age: !integer ? { minimum: 0, maximum: 120 }
}

As a side note, that idea was in the last of ten articles in which I've
written about 30k words on JSON and data formats and schemas [2]. The
audience for the articles probably overlaps with people on this mailing list,
so I'd be interested in feedback. I'm slowly working towards some interesting
schema and data format design based on serialization fundamentals. I'm
keeping an eye out for work by Viktor Klang, and just saw there's a new Devoxx
update [3], so I'll watch that to see if there's any interesting overlap. Is
there another mailing list where discussion on serialization has been
happening?

Regards,
David.
[1] https://litterat.substack.com/p/a-deep-dive-into-json-part-10-constraints
[2] https://litterat.substack.com/p/a-deep-dive-into-json-part-1-introduction
[3] https://www.youtube.com/watch?v=guF2NvgJIN8

On Tue, 14 Oct 2025 at 12:18, Swaranga Sarma wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
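On the Java side, the closest analogue to these parameterised schema
constraints is probably the Jakarta Bean Validation API, which likewise takes
parameters rather than arbitrary functions. A hedged sketch mirroring the
person example above (assumes the jakarta.validation API plus an
implementation on the classpath; the checks run when a Validator is invoked,
not at assignment time):

import jakarta.validation.constraints.Max;
import jakarta.validation.constraints.Min;
import jakarta.validation.constraints.Size;

public class Person {
    // Same bounds as the schema sketch above: name length 4..30
    @Size(min = 4, max = 30)
    public String name;

    // ...and age in 0..120
    @Min(0)
    @Max(120)
    public int age;
}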
From archie.cobbs at gmail.com Tue Oct 14 16:07:25 2025
From: archie.cobbs at gmail.com (Archie Cobbs)
Date: Tue, 14 Oct 2025 11:07:25 -0500
Subject: Ad hoc type restriction
In-Reply-To:
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com>
Message-ID:

Thanks for the comments - this is an interesting discussion. There are lots
of angles, so I'll try to focus on one or two at a time.

Ethan McCue wrote:

> As I see it, your proposal #1 is already viable

Can you elaborate on how that would work?

> Maybe let's interrogate this from the other side: Why is this not easy to
> do? What friction points are essential (you do need to get a processor and
> put it on paths - what else?) and which are artificial? Clearly it's harder
> than a "normal library," why is that?

Here's a very simple, concrete example of what I want. To be honest, I'm not
sure whether this is just "difficult" or "impossible" today, but in some
sense that doesn't matter - either way, it's too hard!

WHAT I HAVE: Zillions of lines of code that look roughly like this
(continuing the previous @PhoneNumber example):

public void dial(String pn) {
    Preconditions.checkArgument(pn != null && pn.matches(PhoneNumber.PATTERN));
    ... // do whatever
}

WHAT I WANT: To be able to instead say this:

public void dial(@PhoneNumber String number) {
    ... // do whatever
}

AND have the following be true:

- At compile time...
    - I get a warning or error if any code tries to invoke dial() with a
      "plain" String parameter, or assign a plain String to a
      @PhoneNumber String
    - There is some well-defined, compiler-sanctioned way to validate a
      phone number, using custom logic I define, so I can assign it to a
      @PhoneNumber String without said error/warning. Even if it involves
      @SuppressWarnings, I'll take it.
- At runtime...
    - No explicit check of the number parameter is performed by the dial()
      method (efficiency)
    - The dial() method is guaranteed (modulo sneaky tricks) that number is
      always a valid phone number

Obviously you can replace @PhoneNumber with any other assertion. For example:

public void editProfile(@LoggedIn User user) { ... }

Is the above possible using the Checker Framework? I couldn't figure out how,
though that may be due to my own lack of ability. But even if it is possible,
via the Checker Framework or otherwise, I don't see this being done in any
widespread fashion, which seems like pretty strong evidence that it's too
hard.

Brian Goetz wrote:

> I just want to point out that there's a sort of "syntax error" in your
> proposal... Java provides annotations as a means of "structured comments"
> on declarations and type uses, but the Java language does not, and will
> not, impart any semantics to programs on the basis of annotations.

Right - it's like Java gives us sticky notes, but they are limited to just
being "developer notes", and the compiler support for plugging in custom
automation using those notes (annotation processors) is too limited to
implement the above idea. So (side note) in theory this whole idea could be
redirected toward expanding compiler "plug-in" support instead of "annotation
hacking".

But I think it's OK for certain "sticky notes" to be understood by the
compiler, and to have the compiler offer corresponding assistance in
verifying them (which it is already doing - see below). I also agree that
having annotations affect the generated bytecode ("runtime semantics") is a
big step beyond that, but maybe that's not necessary in this case.

> If you mean that the compiler actually is going to get into the act,
> though, then this is not an annotation-driven feature, this is a full-blown
> language feature and should be thought of accordingly. I know it's tempting
> to view annos as a "shortcut" to language features, but if something has
> semantics, it's part of the language, and sadly that means no shortcuts.

I agree in principle. Unfortunately, that sounds like something that would
take a long time to happen.

Moreover, there is at least one counter-example to your claim: @Override.
That annotation is not just a "developer sticky note" - it triggers a
compile-time error if what it asserts is not true - precisely what we want
here. Just curious... What was the thinking that allowed @Override to be an
exception to the rule, and why would that thinking no longer apply? And after
all, who doesn't love @Override? :)

-Archie

--
Archie L. Cobbs

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brian.goetz at oracle.com Tue Oct 14 16:32:21 2025
From: brian.goetz at oracle.com (Brian Goetz)
Date: Tue, 14 Oct 2025 12:32:21 -0400
Subject: Ad hoc type restriction
In-Reply-To:
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com>
Message-ID: <36edf5e8-791e-4d31-8e7d-05cea3eabd22@oracle.com>

> WHAT I WANT: To be able to instead say this:
>
> public void dial(@PhoneNumber String number) {
>     ... // do whatever
> }
>
> AND have the following be true:
>
> - At compile time...
>     - I get a warning or error if any code tries to invoke dial() with a
>       "plain" String parameter, or assign a plain String to a
>       @PhoneNumber String
>     - There is some well-defined, compiler-sanctioned way to validate a
>       phone number, using custom logic I define, so I can assign it to a
>       @PhoneNumber String without said error/warning. Even if it involves
>       @SuppressWarnings, I'll take it.
> - At runtime...
>     - No explicit check of the number parameter is performed by the dial()
>       method (efficiency)
>     - The dial() method is guaranteed (modulo sneaky tricks) that number
>       is always a valid phone number
>
> Obviously you can replace @PhoneNumber with any other assertion. For
> example: public void editProfile(@LoggedIn User user) { ... }
>
> Is the above possible using the Checker Framework? I couldn't figure out
> how, though that may be due to my own lack of ability.

Yes, but you get no implicit conversion from String to @PhoneNumber String --
you have to call a method to explicitly do the conversion:

    @PhoneNumber String validatePhoneNumber(String s) { ... do the thing ... }

This is just a function from String -> @PN String, which just happens to
preserve its input after validating it (or throws if validation fails.)

A custom checker can validate that you never assign to, pass, return, or cast
a non-PN String when a PN String is expected, and generate diagnostics
accordingly (warnings or errors, as you like.)

> But even if it is possible via the Checker Framework or otherwise, I don't
> see this being done in any widespread fashion, which seems like pretty
> strong evidence that it's too hard.

It's not that hard, but it _is_ hard to get people to adopt this stuff. Very
few anno-driven type system extensions have gained any sort of adoption, even
if they are useful and sound. (And interestingly, a corpus search found that
the vast majority of those that are used have to do with nullity management.)

Why don't these things get adopted? Well, friction is definitely a part of
it. You have to set up a custom toolchain configuration. You have to do some
work to satisfy the stricter type system, which is often fussy and annoying,
especially if you are trying to add it to existing code. You have to program
in a dialect, often one that is underspecified. Libraries you use won't know
that dialect, so at every boundary between your code and library code that
might result in a new PhoneNumber being exchanged, you have to introduce some
extra code or assertion at the boundary. And to many developers, this sounds
like a lot of extra work to get marginally increased confidence.

There is similar data to observe in less invasive static analysis, too. When
people first encounter a good static analysis tool, they get really excited,
it finds a bunch of bugs fairly quickly, and they want to build it into their
methodology. But somewhere along the line, it falls away. Part of it is the
friction (you have to run it in your CI, and on each developer workstation,
with the same configuration), and part of it is diminishing returns. But most
developers don't feel like they are getting enough for the effort.

Of course, the more we can decrease the friction, the lower the payback has
to be to make it worthwhile.

> But I think it's OK for certain "sticky notes" to be understood by the
> compiler, and to have the compiler offer corresponding assistance in
> verifying them (which it is already doing - see below).
> I also agree
> that having annotations affect the generated bytecode ("runtime
> semantics") is a big step beyond that, but maybe that's not necessary
> in this case.

There are a few "sticky notes" that the "compiler" does in fact understand,
such as @Override or @FunctionalInterface. (I put "compiler" in quotes
because the compiler doesn't get to have an opinion about anything semantic;
that's the language spec's job.) But these have a deliberately limited,
narrow role: they capture scrutable structural assertions that require (per
language spec!) the compiler to statically reject some programs that don't
conform to the assertions, but they never have any linguistic semantics for
correct programs. That is, for a correct program P with annotations,
stripping all annotations out of P MUST produce a semantically equivalent
program. (The next question in this dialog (which I've only had a few zillion
times) is "what about frameworks that use reflection to drive semantics." But
that one kind of answers itself when you think about it, so I'll just skip
ahead now.)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rotanolexandr842 at gmail.com Wed Oct 15 00:49:11 2025
From: rotanolexandr842 at gmail.com (Olexandr Rotan)
Date: Wed, 15 Oct 2025 03:49:11 +0300
Subject: Ad hoc type restriction
In-Reply-To: <36edf5e8-791e-4d31-8e7d-05cea3eabd22@oracle.com>
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com> <36edf5e8-791e-4d31-8e7d-05cea3eabd22@oracle.com>
Message-ID:

> The next question in this dialog (which I've only had a few zillion times)
> is "what about frameworks that use reflection to drive semantics." But
> that one kind of answers itself when you think about it, so I'll just skip
> ahead now.

Just out of curiosity, what was the motivation behind annotations with
runtime retention if they are not expected to be scanned for by frameworks?
Even talking about things like aspect-oriented programming: if advice does
not alter the behaviour of the invocation, it will most likely be designed to
produce some side effect, which is also a semantics change.

On Tue, Oct 14, 2025 at 7:32 PM Brian Goetz wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brian.goetz at oracle.com Wed Oct 15 01:09:46 2025
From: brian.goetz at oracle.com (Brian Goetz)
Date: Tue, 14 Oct 2025 21:09:46 -0400
Subject: Ad hoc type restriction
In-Reply-To:
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com> <36edf5e8-791e-4d31-8e7d-05cea3eabd22@oracle.com>
Message-ID:

There's nothing wrong with annotations being scanned by frameworks. Indeed,
the entire point of annotations is to allow code authors to decorate source
declarations with "structured comments" in a way that is scrutable, both
statically and dynamically, to frameworks and tooling*. What annotations are
_not_ for is to impart semantics _at the Java language level_. Annotation
plumbing is a service the language and compiler perform for the benefit of
libraries and frameworks, but it recuses itself from being a beneficiary of
that service.

(*At this point someone will typically pipe in "But the compiler is a tool.
Ha! I am very clever." But remember, the Java compiler has no discretion
whatsoever about program semantics. That discretion belongs purely to the
language specification.)

On 10/14/2025 8:49 PM, Olexandr Rotan wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
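The division of labor Brian describes is easy to see in miniature: the
language carries a RUNTIME-retained annotation through to the class file, a
framework reads it reflectively and acts on it, and stripping the annotation
changes nothing at the language level. A small self-contained sketch
(@Audited and the scan are illustrative, not any real framework):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class RetentionDemo {
    // RUNTIME retention makes the annotation visible to reflection;
    // the Java language itself assigns it no meaning.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Audited { }

    @Audited
    void transfer() { }

    public static void main(String[] args) {
        for (Method m : RetentionDemo.class.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Audited.class)) {
                // A framework would interpose auditing here; javac never looks.
                System.out.println(m.getName() + " is audited");
            }
        }
    }
}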
> > A custom checker can validate that you never assign to, pass, > return, or cast a non-PN String when a PN String is expected, and > generate diagnostics accordingly (warnings or errors, as you like.)
> >> But even if it is possible via checker framework or otherwise, I >> don't see this being done in any widespread fashion, which seems >> like pretty strong evidence that it's too hard.
> > It's not that hard, but it _is_ hard to get people to adopt this > stuff. Very few anno-driven type system extensions have gained > any sort of adoption, even if they are useful and sound. (And > interestingly, a corpus search found that the vast majority of > those that are used have to do with nullity management.)
> > Why don't these things get adopted? Well, friction is definitely > a part of it. You have to set up a custom toolchain > configuration. You have to do some work to satisfy the stricter > type system, which is often fussy and annoying, especially if you > are trying to add it to existing code. You have to program in a > dialect, often one that is underspecified. Libraries you use > won't know that dialect, so at every boundary between your code > and library code that might result in a new PhoneNumber being > exchanged, you have to introduce some extra code or assertion at > the boundary. And to many developers, this sounds like a lot of > extra work to get marginally increased confidence.
> > There is similar data to observe in less invasive static analysis, > too. When people first encounter a good static analysis tool, > they get really excited, it finds a bunch of bugs fairly quickly, > and they want to build it into their methodology. But somewhere > along the line, it falls away. Part of it is the friction (you > have to run it in your CI, and on each developer workstation, with > the same configuration), and part of it is diminishing returns. > But most developers don't feel like they are getting enough for > the effort.
> > Of course, the more we can decrease the friction, the lower the > payback has to be to make it worthwhile.
> >> But I think it's OK for certain "sticky notes" to be understood >> by the compiler, and have the compiler offer corresponding >> assistance in verifying them (which it is already doing - see >> below). I also agree that having annotations affect the generated >> bytecode ("runtime semantics") is a big step beyond that, but >> maybe that's not necessary in this case.
> > There are a few "sticky notes" that the "compiler" does in fact > understand, such as @Override or @FunctionalInterface. (I put > "compiler" in quotes because the compiler doesn't get to have an > opinion about anything semantic; that's the language spec's job.) > But these have a deliberately limited, narrow role: they capture > scrutable structural assertions that require (per language spec!) > the compiler to statically reject some programs that don't conform > to the assertions, but they never have any linguistic semantics > for correct programs. That is, for a correct program P with > annotations, stripping all annotations out of P MUST > produce a semantically equivalent program. (The next question in this > dialog (which I've only had a few zillion times) is "what about > frameworks that use reflection to drive semantics." But that one > kind of answers itself when you think about it, so I'll just skip > ahead now.)
>
> -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From scolebourne at joda.org Wed Oct 15 06:34:44 2025
From: scolebourne at joda.org (Stephen Colebourne)
Date: Wed, 15 Oct 2025 07:34:44 +0100
Subject: Primitive type patterns - an alternative approach (JEP 507)
Message-ID:

In the vein of JEP feedback, I believe it makes sense to support primitive types in pattern matching, and will make sense to support value types in the future. And I can see the great work that has been done so far to enable this.

Unfortunately, I hate the proposed syntactic approach in JEP 507. It wasn't really clear to me as to *why* I hated the syntax until I had enough time to really think through what Java does in the area of primitive type casts, and why extending that as-is to pattern matching would IMO be a huge mistake.

(Please note that I fully grasp the pedagogical approach wrt instanceof defending an unsafe cast, but no matter how much it is repeated, I don't buy it, and I don't believe it is good enough by itself.)

To capture my thoughts, I've written up how Java's current approach to casts leads me to an alternative proposal - type conversion casts, and type conversion patterns: https://tinyurl.com/typeconvertjava1

thanks
Stephen

From davidalayachew at gmail.com Wed Oct 15 11:25:15 2025
From: davidalayachew at gmail.com (David Alayachew)
Date: Wed, 15 Oct 2025 07:25:15 -0400
Subject: Primitive type patterns - an alternative approach (JEP 507)
In-Reply-To: References: Message-ID:

Hello Stephen,

I already gave my thoughts on your Reddit post [1], so I'll avoid duplicating them here. All I'll add is that you should CC Valhalla Dev as well. Not only are you explicitly naming value types as part of your reasoning, but the recent Valhalla videos that came out seem to mention (in passing) some of the things you are concerned about.

Thank you for your time.
David Alayachew

[1] = https://old.reddit.com/r/java/comments/1o747zu/type_conversion_in_java_an_alternative_proposal/njlech1/

On Wed, Oct 15, 2025, 2:35 AM Stephen Colebourne wrote: > In the vein of JEP feedback, I believe it makes sense to support > primitive types in pattern matching, and will make sense to support > value types in the future. And I can see the great work that has been > done so far to enable this. > > Unfortunately, I hate the proposed syntactic approach in JEP 507. It > wasn't really clear to me as to *why* I hated the syntax until I had > enough time to really think through what Java does in the area of > primitive type casts, and why extending that as-is to pattern matching > would IMO be a huge mistake. > > (Please note that I fully grasp the pedagogical approach wrt > instanceof defending an unsafe cast, but no matter how much it is > repeated, I don't buy it, and I don't believe it is good enough by > itself.) > > To capture my thoughts, I've written up how Java's current approach to > casts leads me to an alternative proposal - type conversion casts, and > type conversion patterns: > https://tinyurl.com/typeconvertjava1 > > thanks > Stephen > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From pedro.lamarao at prodist.com.br Wed Oct 15 13:55:45 2025
From: pedro.lamarao at prodist.com.br (Pedro Lamarão)
Date: Wed, 15 Oct 2025 10:55:45 -0300
Subject: Primitive type patterns - an alternative approach (JEP 507)
In-Reply-To: References: Message-ID:

Hello Stephen! In your essay, you write: "A number held as a long is very different to a number held as an int, even if the number has the same magnitude."
The only behavioural difference I can think of is around overflow or underflow -- because, say, INT_MAX + 1 would give a different result than ((long) INT_MAX) + 1. What other differences am I missing? I'm unsure how these arguments stand when we go all the way down to the world where a number held as an int in Java is in fact held in a machine register which is long-wide.

Atte.
Pedro.

On Wed, 15 Oct 2025 at 03:35, Stephen Colebourne <scolebourne at joda.org> wrote: > In the vein of JEP feedback, I believe it makes sense to support > primitive types in pattern matching, and will make sense to support > value types in the future. And I can see the great work that has been > done so far to enable this. > > Unfortunately, I hate the proposed syntactic approach in JEP 507. It > wasn't really clear to me as to *why* I hated the syntax until I had > enough time to really think through what Java does in the area of > primitive type casts, and why extending that as-is to pattern matching > would IMO be a huge mistake. > > (Please note that I fully grasp the pedagogical approach wrt > instanceof defending an unsafe cast, but no matter how much it is > repeated, I don't buy it, and I don't believe it is good enough by > itself.) > > To capture my thoughts, I've written up how Java's current approach to > casts leads me to an alternative proposal - type conversion casts, and > type conversion patterns: > https://tinyurl.com/typeconvertjava1 > > thanks > Stephen -------------- next part -------------- An HTML attachment was scrubbed... URL:

From brian.goetz at oracle.com Wed Oct 15 15:39:11 2025
From: brian.goetz at oracle.com (Brian Goetz)
Date: Wed, 15 Oct 2025 11:39:11 -0400
Subject: Primitive type patterns - an alternative approach (JEP 507)
In-Reply-To: References: Message-ID:

I have some guesses about why you are still so upset by this feature. Since you have raised the suspense level through indirection, I'll play along by sharing my guesses before reading.

My guess about what is making you uncomfortable (in addition to the obvious: you are still having trouble accepting that instanceof will be taking on a larger role than "subtype test") is that the meaning of a pattern (e.g., `case Foo f`) is determined _relative to the static type of some match candidate, specified elsewhere_, such as the selector expression of a switch (for a top level pattern) or the component type of a record (for a nested pattern.) Further, I'll guess that your proposal involves making conversion more explicit by adding new syntax, either (a) distinguishing a total pattern match and a partial one, or (b) distinguishing a pattern match involving subtyping and one involving conversion. (If I had to bet further, I would take (b).)

Let's see how I did ... pretty close! You wanted to go _even more explicit_ than (b) -- by explicitly naming both types (even though the compiler already has them in hand.)

Zooming out, design almost always involves "lump vs split" choices; do we highlight the specific differences between cases, or their commonality? Java does a lot of lumping; for example, we use the `==` operator to compare both references and primitives, but `==` on ints means something different than on floats or object references. (But also, it doesn't mean something different; it asks the same question: "are these two things identical.")
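To make the `==` example concrete, a small sketch (the boxed case is "typically" false because the JLS guarantees box caching only for small values):

    void demo() {
        int a = 1_000_000, b = 1_000_000;
        System.out.println(a == b);        // true: compares int values

        double nan = 0.0 / 0.0;            // NaN
        System.out.println(nan == nan);    // false: on doubles, == follows IEEE 754

        Integer p = 1_000_000, q = 1_000_000;
        System.out.println(p == q);        // typically false: compares references to two distinct boxes
    }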
The choice to lump or split in any given situation is of course situational, but Java tends to err on the side of lumping, and my observation is that whenever Stephen comes with a proposal, it is usually one that involves "more splitting." (Not a criticism; it's a valid philosophical viewpoint.)

In this case, you've observed that Java already does a lot of lumping with conversions; a cast from A to B is (a) determined by the combination of types A and B, and (b) its meaning can change drastically depending on these types. (This isn't new; this is Java 1.0 stuff, which got lumped further in Java 5 when autoboxing was added.)

(Pause for a brief public service announcement: if you are even reading this far, you should go read JLS Chapter 5 at least once. More so than any other feature, this is a case where, if you listen carefully to the language, )

For those who didn't go and read JLS 5, here's the set of conversions that are permitted in a casting context:

- an identity conversion (§5.1.1)
- a widening primitive conversion (§5.1.2)
- a narrowing primitive conversion (§5.1.3)
- a widening and narrowing primitive conversion (§5.1.4)
- a boxing conversion (§5.1.7)
- a boxing conversion followed by a widening reference conversion (§5.1.5)
- a widening reference conversion (§5.1.5)
- a widening reference conversion followed by an unboxing conversion
- a widening reference conversion followed by an unboxing conversion, then followed by a widening primitive conversion
- a narrowing reference conversion (§5.1.6)
- a narrowing reference conversion followed by an unboxing conversion
- an unboxing conversion (§5.1.8)
- an unboxing conversion followed by a widening primitive conversion

That's a big list! When we see "cast A to B", we must look at the types A and B, and decide if any of these conversions are applicable (it's not obvious, but (effectively) given a combination of A and B, at most one will be). There's a lot going on here -- casting is pretty lumpy! (The fact that casting can mean one of 15 different things -- and most people didn't even notice until now -- shows that lumping is often a good idea; it hides small distinctions in favor of general concepts.)

At root, what I think is making you uncomfortable is that:

    int x = (int) anObject

and

    int x = (int) aLong

use the same syntax, but (feel like they) mean different things. (In other words, that casting (and all the other conversion contexts) is lumpy.) And while this might not have bothered you too much before, the fact that patterns can be _composed_ makes for more cases where the meaning of something is determined by the types involved, but those types are not right there in your face like they are in the lines above. When you see

    case Box(String s):

this might be an exhaustive pattern on Box (if the component of Box is a String) or might be a partial pattern (if the component is, say, Object.) The concept is not new, but the more expressive syntax -- the one that lets you compose two patterns -- means that running down the types involved requires some more work (here, you have to look at the declaration of Box.)

But, this is nothing new in Java! This happens with overloading:

    m(x)

could select different overloads of `m` based on the type of `x`. And it happens with chaining:

    x.foo()
        .bar()

where we don't even know where to look for the `bar` method until we know the return type of `foo()`.
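To ground the `Box` example above: whether `case Box(String s)` is exhaustive depends entirely on the declared component type. A sketch (the record and method names are illustrative):

    record Box(Object contents) { }

    static String describe(Box b) {
        return switch (b) {
            case Box(String s) -> "a string: " + s;  // partial: the component type is Object
            case Box(Object o) -> "something else";  // restores exhaustiveness
        };
    }
    // Had Box been declared as record Box(String contents) { }, then
    // case Box(String s) alone would have been exhaustive.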
At root, this proposal is bargaining with the worry that "but users won't know what the code does if the types aren't right there in their face." And while I get that fear, we don't always optimize language design for "the most important thing is making sure no one is confused, ever"; that would probably result in a language no one wants to program in. There's a reason we lump. (To take a silly-extreme version, would you want to need different addition operators for `int` and `long`? What would the table of operators in such a language look like?)

So, I get why you want to make these things explicit. But that's going in the opposite direction from where the language is going. Valhalla is doing what it can to heal this rift, not double down on it; the set of "conversions" won't be baked into a table in JLS 5.5. Let's take one example of where this is going:

    record IntBox(int x) { }

    switch (box) {
        case IntBox(0) -> "zero";
        ...
    }

The zero here is a "constant pattern". But what does it mean to match the constant zero? We might think this is pretty complicated, given the current state of conversions. We already have special pleading in JLS 5 for converting `int` constants to the smaller integral types (byte, short, char), because no one wanted to live in a world where we had sigils for every primitive type, such as `0b` and `0s` -- too splitty. We like being able to use integer literals and let the compiler figure out that `int 0` and `byte 0` are the same thing; that's a mostly-satisfying lumping.

Now, let's say that our box contains not an int, but a Complex128, a new numeric type. We are probably not going to add complex literals (e.g., 1 + 2i) to the language; adding new literal forms for every new type won't scale. But we will be able to let the author of Complex128 say that `int` can be widened exactly to `Complex128` (note: this is not the same thing as letting Complex128 define an "implicit conversion" to int; it is more constrained). And "is this complex value zero" is a reasonable question in this domain, so we might well want to ask:

    switch (complexBox) {
        case ComplexBox(0) -> "zero";
        ...
    }

What does this mean? Well, the framework for primitive patterns gives us a trivially simple semantics for such constant patterns, even when the constant is of a different type than the value being tested; the above pattern match is equivalent to:

    case ComplexBox(int x) when x == 0 -> "zero";

That's it; once we give a clear meaning to "can this complex be cleanly cast to int", the meaning of constant patterns falls out for free.

Now, I offer this example not as "so therefore all of JEP 507 is justified", but as a window into the fact that JEP 507 doesn't exist in a design vacuum -- it is related to numerous other directions the language is going in (here, constant patterns, and user-definable numeric types that are convertible with existing numeric types.) Some JEPs are the end of their own story, whereas others are more about leveling the ground for future progress.

On 10/15/2025 2:34 AM, Stephen Colebourne wrote: > In the vein of JEP feedback, I believe it makes sense to support > primitive types in pattern matching, and will make sense to support > value types in the future. And I can see the great work that has been > done so far to enable this. > > Unfortunately, I hate the proposed syntactic approach in JEP 507.
It > wasn't really clear to me as to *why* I hated the syntax until I had > enough time to really think through what Java does in the area of > primitive type casts, and why extending that as-is to pattern matching > would IMO be a huge mistake. > > (Please note that I fully grasp the pedagogical approach wrt > instanceof defending an unsafe cast, but no matter how much it is > repeated, I don't buy it, and I don't believe it is good enough by > itself.) > > To capture my thoughts, I've written up how Java's current approach to > casts leads me to an alternative proposal - type conversion casts, and > type conversion patterns: > https://tinyurl.com/typeconvertjava1 > > thanks > Stephen -------------- next part -------------- An HTML attachment was scrubbed... URL:

From manu at sridharan.net Wed Oct 15 16:34:33 2025
From: manu at sridharan.net (Manu Sridharan)
Date: Wed, 15 Oct 2025 09:34:33 -0700
Subject: Ad hoc type restriction
In-Reply-To: References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com> <36edf5e8-791e-4d31-8e7d-05cea3eabd22@oracle.com>
Message-ID:

Hi all, regarding what the Checker Framework and similar tools can do:

WHAT I WANT: To be able to instead say this: > > public void dial(@PhoneNumber String number) { > ... // do whatever > } > > AND have the following be true: > > At compile time... > >> I get a warning or error if any code tries to invoke dial() with a >> "plain" String parameter, or assign a plain String to a @PhoneNumber >> String >> There is some well-defined, compiler-sanctioned way to validate a phone >> number, using custom logic I define, so I can assign it to a @PhoneNumber >> String without said error/warning. Even if it involves @SuppressWarnings, >> I'll take it. >> > At runtime... > >> No explicit check of the number parameter is performed by the dial() method >> (efficiency) >> The dial() method is guaranteed (modulo sneaky tricks) that number is always >> a valid phone number >> > Obviously you can replace @PhoneNumber with any other assertion. For > example: public void editProfile(@LoggedIn User user) { ... } > > Is the above possible using the checker framework? I couldn't figure out > how, though that may be due to my own lack of ability. >

This is all mostly possible via the Checker Framework and similar approaches. You wouldn't need @SuppressWarnings annotations for validation either, due to type refinement. And, for this type of property, where you're essentially trying to introduce new subtypes of an existing type and then enforce type compatibility at assignments, the implementation effort to write the checker is pretty low. And features like generics would also be handled out of the box.

In terms of the "guarantee" that dial always gets a valid phone number, that requires that all code executed at runtime was checked by the checker. If dial might get invoked by unchecked code (e.g., if it's part of the public API of some library), then some kind of runtime checks are probably still needed inside dial. (I'm assuming that reflection, dynamic creation of classes, etc. count as "sneaky tricks" but those are also potentially problematic.)

In terms of the challenges of running the Checker Framework / barriers to adoption: adding the Checker Framework to your build is not too hard; it's just another annotation processor plus the corresponding flags to enable / configure the checks you want.
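A sketch of the type refinement mentioned above, assuming the hypothetical @PhoneNumber qualifier from earlier in the thread (@EnsuresQualifierIf is a real Checker Framework annotation; the method names and regex are illustrative):

    import org.checkerframework.framework.qual.EnsuresQualifierIf;

    class Dialer {
        // When this returns true, the checker refines the type of its
        // argument ("#1") to @PhoneNumber String on that branch.
        @EnsuresQualifierIf(result = true, expression = "#1", qualifier = PhoneNumber.class)
        static boolean isValidPhoneNumber(String s) {
            return s != null && s.matches("\\+[1-9][0-9]{6,14}");
        }

        static void dial(@PhoneNumber String number) { /* ... */ }

        static void example(String input) {
            if (isValidPhoneNumber(input)) {
                dial(input);  // accepted: input is refined here, no @SuppressWarnings needed
            }
            // dial(input);   // outside the test, this would be flagged
        }
    }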
As implemented the Checker Framework can introduce a significant build-time overhead (potentially 5X or greater), which may be too much to incur on every compile. This is not fundamental; NullAway takes a similar approach and we measured the overhead to be around 10%. But reducing overhead of the Checker Framework itself may require significant work. One can run the Checker Framework in a CI job separate from the normal build, but this delays feedback to the developer.

Honestly I think the biggest barrier to adoption is writing the necessary annotations to get the checker to initially pass; for an extant large code base this can be significant work, depending on the property. There is research on inferring these annotations for existing code bases (paper 1, paper 2).

Best,
Manu

On Oct 14, 2025 at 18:09:46, Brian Goetz wrote: > There's nothing wrong with annotations being scanned by frameworks. > Indeed, the entire point of annotations is to allow code authors to > decorate source declarations with "structured comments" in a way that is > scrutable, both statically and dynamically, to frameworks and tooling*. > What annotations are _not_ for is to impart semantics _at the Java language > level_. Annotation plumbing is a service the language and compiler perform > for the benefit of libraries and frameworks, but it recuses itself from > being a beneficiary of that service. > > > (*At this point someone will typically pipe in "But the compiler is a > tool. Ha! I am very clever." But remember, the Java compiler has no > discretion whatsoever about program semantics. That discretion belongs > purely to the language specification.) > > On 10/14/2025 8:49 PM, Olexandr Rotan wrote: > > The next question in this dialog (which I've only had a few zillion times) >> is "what about frameworks that use reflection to drive semantics." But >> that one kind of answers itself when you think about it, so I'll just skip >> ahead now.) > > > Just out of curiosity, what was the motivation behind the annotations with > runtime retention if they are not expected to be scanned for by frameworks? > Even if talking about things like aspect-oriented programming, if advice > does not alter the behaviour of the invocation, it will most likely be > designed to produce some side-effect, which is also a semantics change > > On Tue, Oct 14, 2025 at 7:32 PM Brian Goetz > wrote: > >> WHAT I WANT: To be able to instead say this: >> >> public void dial(@PhoneNumber String number) { >> ... // do whatever >> } >> >> AND have the following be true: >> >> - At compile time... >> - I get a warning or error if any code tries to invoke dial() with >> a "plain" String parameter, or assign a plain String to a @PhoneNumber >> String >> - There is some well-defined, compiler-sanctioned way to validate >> a phone number, using custom logic I define, so I can assign it to a @PhoneNumber >> String without said error/warning. Even if it involves >> @SuppressWarnings, I'll take it. >> - At runtime... >> - No explicit check of the number parameter is performed by the >> dial() method (efficiency) >> - The dial() method is guaranteed (modulo sneaky tricks) that number >> is always a valid phone number >> >> Obviously you can replace @PhoneNumber with any other assertion. For >> example: public void editProfile(@LoggedIn User user) { ... } >> >> Is the above possible using the checker framework? I couldn't figure out >> how, though that may be due to my own lack of ability.
>> >> >> Yes, but you get no implicit conversion from String to @PhoneNumber >> String -- you have to call a method to explicitly do the conversion: >> >> @PhoneNumber String validatePhoneNumber(String s) { ... do the thing >> ... } >> >> This is just a function from String -> @PN String, which just happens to >> preserve its input after validating it (or throws if validation fails.) >> >> A custom checker can validate that you never assign to, pass, return, or >> cast a non-PN String when a PN String is expected, and generate diagnostics >> accordingly (warnings or errors, as you like.) >> >> But even if it is possible via checker framework or otherwise, I don't >> see this being done in any widespread fashion, which seems like pretty >> strong evidence that it's too hard. >> >> >> It's not that hard, but it _is_ hard to get people to adopt this stuff. >> Very few anno-driven type system extensions have gained any sort of >> adoption, even if they are useful and sound. (And interestingly, a corpus >> search found that the vast majority of those that are used have to do with >> nullity management.) >> >> Why don't these things get adopted? Well, friction is definitely a part >> of it. You have to set up a custom toolchain configuration. You have to >> do some work to satisfy the stricter type system, which is often fussy and >> annoying, especially if you are trying to add it to existing code. You >> have to program in a dialect, often one that is underspecified. Libraries >> you use won't know that dialect, so at every boundary between your code and >> library code that might result in a new PhoneNumber being exchanged, you >> have to introduce some extra code or assertion at the boundary. And to >> many developers, this sounds like a lot of extra work to get marginally >> increased confidence. >> >> There is similar data to observe in less invasive static analysis, too. >> When people first encounter a good static analysis tool, they get really >> excited, it finds a bunch of bugs fairly quickly, and they want to build it >> into their methodology. But somewhere along the line, it falls away. Part >> of it is the friction (you have to run it in your CI, and on each developer >> workstation, with the same configuration), and part of it is diminishing >> returns. But most developers don't feel like they are getting enough for >> the effort. >> >> Of course, the more we can decrease the friction, the lower the payback >> has to be to make it worthwhile. >> >> But I think it's OK for certain "sticky notes" to be understood by the >> compiler, and have the compiler offer corresponding assistance in verifying >> them (which it is already doing - see below). I also agree that having >> annotations affect the generated bytecode ("runtime semantics") is a big >> step beyond that, but maybe that's not necessary in this case. >> >> >> There are a few "sticky notes" that the "compiler" does in fact >> understand, such as @Override or @FunctionalInterface. (I put "compiler" >> in quotes because the compiler doesn't get to have an opinion about >> anything semantic; that's the language spec's job.) But these have a >> deliberately limited, narrow role: they capture scrutable structural >> assertions that require (per language spec!) the compiler to statically >> reject some programs that don't conform to the assertions, but they never >> have any linguistic semantics for correct programs.
That is, for a >> correct program P with annotations, stripping all annotations out of P MUST >> produce a semantically equivalent program. (The next question in this >> dialog (which I've only had a few zillion times) is "what about frameworks >> that use reflection to drive semantics." But that one kind of answers >> itself when you think about it, so I'll just skip ahead now.) >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From scolebourne at joda.org Wed Oct 15 23:22:12 2025
From: scolebourne at joda.org (Stephen Colebourne)
Date: Thu, 16 Oct 2025 00:22:12 +0100
Subject: Primitive type patterns - an alternative approach (JEP 507)
In-Reply-To: References: Message-ID:

On Wed, 15 Oct 2025 at 16:39, Brian Goetz wrote: > Let's see how I did ... pretty close! You wanted to go _even more explicit_ than (b) -- by explicitly naming both types

Only one type is named in most cases - the FromType is optional. It would be perfectly possible to implement the proposal without the FromType part, but there are use cases where it comes in handy.

> Zooming out, design almost always involves "lump vs split" choices; do we highlight the specific differences between cases, or their commonality?

Another way to express this distinction is "what level of magic is acceptable?"

> For those who didn't go and read JLS 5, here's the set of conversions that are permitted in a casting context:

Although there are 15 different conversions listed, there are only 4 basic things (the rest is language specification noise and some historic oddities):

- widening, which no-one worries about
- boxing/unboxing, which is a convenience
- type checks, which can throw CCE
- type conversion, which can silently fail

Type checks only ever have one definition - there is never any ambiguity about whether A is a subtype of B. By contrast, type conversion is an order of magnitude more complex. Given `var d = Decimal.of("42.5")`, what should `(int) d` return?

* 42 because it truncates
* 42 because it rounds half-down (or floor, or half-even)
* 43 because it rounds half-up (or ceiling, or up)
* throw because it is a lossy conversion
* a compile error

The compile error option is not at all unreasonable - why should the language pick which of the 8 rounding modes is used? Maybe developers should be forced to use a method to convert, where the mode can be specified. My proposal was not that extreme, because it does allow a default answer (throw if lossy), but argues that it needs to be called out.

Circling back to "what level of magic is acceptable?". The trouble here is that partial type patterns and unconditional type patterns already share the same syntax, and that is bad enough. To add in type conversions is just way too far. This isn't lumping, it is magic. Trying to read and decipher code with merged type checks and type conversions in patterns simply isn't possible without an excessive amount of external context, which is potentially very difficult to do in PRs for example. All my proposal really argues is that alternative syntaxes are available that make the code readable again. With ~ the visible syntax question becomes "if I can convert to an int ....". Other options are available.

> But, this is nothing new in Java! This happens with overloading:
> m(x)

Method overloading is usually done to enable type conversion (ironic, huh?). And it is rarely confusing because the overloads represent the same thing precisely because they *are* type conversion.
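(Stepping back to the Decimal example for a moment: the ambiguity is not hypothetical. java.math.BigDecimal already presents exactly this menu of answers, which is a sketch of why having the language pick one is contentious; `Decimal` above is Stephen's hypothetical type, BigDecimal stands in for it here:)

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    BigDecimal d = new BigDecimal("42.5");
    d.setScale(0, RoundingMode.DOWN);       // 42: truncates
    d.setScale(0, RoundingMode.HALF_DOWN);  // 42 (HALF_EVEN and FLOOR also give 42)
    d.setScale(0, RoundingMode.HALF_UP);    // 43 (CEILING and UP also give 43)
    d.intValueExact();                      // throws ArithmeticException: lossy conversion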
With method overloading, the feature is about unification - bringing different types together to a single code path. With patterns, overloads perform the opposite role, routing to different code paths. That is why it is much, much more important to know which branch is being taken in a switch than which method overload is being used.

Stephen

From archie.cobbs at gmail.com Thu Oct 16 00:10:28 2025
From: archie.cobbs at gmail.com (Archie Cobbs)
Date: Wed, 15 Oct 2025 19:10:28 -0500
Subject: Ad hoc type restriction
In-Reply-To: References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com> <36edf5e8-791e-4d31-8e7d-05cea3eabd22@oracle.com>
Message-ID:

On Wed, Oct 15, 2025 at 11:34 AM Manu Sridharan wrote: > This is all mostly possible via the Checker Framework and similar > approaches. >

In the spirit of due diligence, I am attempting to implement something like "WHAT I WANT" using the checker framework. Currently I'm battling a poor progress/confusion ratio.

You wouldn't need @SuppressWarnings annotations for validation either, due > to type refinement. > And, for this type of property, where you're essentially trying to > introduce new subtypes of an existing type and then enforce type > compatibility at assignments, the implementation effort to write the > checker is pretty low. >

Let me know offline if you (or anyone else) is interested in helping me prototype something (I have a primordial github project).

Thanks,
-Archie

--
Archie L. Cobbs
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From atonita at proton.me Thu Oct 16 08:41:18 2025
From: atonita at proton.me (Aaryn Tonita)
Date: Thu, 16 Oct 2025 08:41:18 +0000
Subject: Ad hoc type restriction
In-Reply-To: References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com> <36edf5e8-791e-4d31-8e7d-05cea3eabd22@oracle.com>
Message-ID: <_hpWIJCY4KKm-omhziKAtP-HzavBj0EWmR2PVZa236HpRJNKeovu8Ni1PtaVMTVk1-Tawmxtiz7gGQx_b70MmIXJ0KCEBRMwtdWnxx8Y6g8=@proton.me>

I somewhat feel like these sidecar static type analysers fail to catch on in Java because annotations are too flexible and thus poorly suited for capturing type information. In comparison with Python, where there are many tools but only a single type annotation, in Java there are many tools with multiple overlapping type annotations that can each be redundantly applied, while the language itself ascribes only a single type, which doesn't support restrictions without OOP ceremony. If there were a dedicated, unique additional type annotation, maybe tools would attempt to be more interoperable with one another, but maybe not.

Today we have tools like Checker competing with JSpecify and the tools that came before it, with JSpecify even pointing at the sad state of affairs (at the level of null restriction) in a Stack Overflow question where there are many competing and poorly interoperating tools. When the interoperability story is complex, the choice is hard to make, and living with the lack of restrictions seems ok (alternatively, one can choose between simply guarding as usual or going with the OOP ceremony).

On Thursday, October 16th, 2025 at 2:12 AM, Archie Cobbs wrote: > On Wed, Oct 15, 2025 at 11:34 AM Manu Sridharan wrote: >> This is all mostly possible via the Checker Framework and similar approaches. > In the spirit of due diligence, I am attempting to implement something like "WHAT I WANT" using the checker framework. Currently I'm battling a poor progress/confusion ratio.
> >> You wouldn't need @SuppressWarnings annotations for validation either, due to [type refinement](https://checkerframework.org/manual/#type-refinement). And, for this type of property, where you're essentially trying to introduce new subtypes of an existing type and then enforce type compatibility at assignments, the implementation effort to write the checker is pretty low.
> Let me know offline if you (or anyone else) is interested in helping me prototype something (I have a primordial github project).
> Thanks,
> -Archie
> --
> Archie L. Cobbs
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From angelos.bimpoudis at oracle.com Thu Oct 16 14:47:05 2025
From: angelos.bimpoudis at oracle.com (Angelos Bimpoudis)
Date: Thu, 16 Oct 2025 14:47:05 +0000
Subject: Towards a Future-Proof Chapter 5: A JLS Refactoring Initiative
Message-ID:

Hello amber-dev, amber-spec-experts,

I wanted to share some emerging thoughts and possible directions regarding the flexibility and expressiveness of conversions in the Java language. Given recent interest in the positioning of conversions in JEP 507, it seemed a timely opportunity to share this ongoing analysis. This is especially relevant as Java continues to evolve its pattern matching facilities and looks ahead to Project Valhalla.

The framework proposed by JEP 507 is more than just a cleanup of existing rules. It's a necessary precursor as the language moves towards enabling user-defined conversions and supporting richer numeric types. These planned features will fundamentally expand how values can be expressed and composed in Java. For a look ahead, I recommend these recent JVMLS presentations:

* Growing the Java Language
* Paths to Support Additional Numeric Types on the Java Platform

As we continue to evolve towards value types and numeric types, it becomes clear that the framework for conversions, which served us well when we only had eight primitive types, needs to be shored up somewhat. Attached is a rough exploration of the issues surrounding shoring up the framework for conversions so that it can support these new directions for Java:

* Towards a Future-Proof Chapter 5: A JLS Refactoring Initiative

Best regards,
Angelos
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From brian.goetz at oracle.com Thu Oct 16 16:22:38 2025
From: brian.goetz at oracle.com (Brian Goetz)
Date: Thu, 16 Oct 2025 12:22:38 -0400
Subject: Primitive type patterns - an alternative approach (JEP 507)
In-Reply-To: References: Message-ID: <805915f9-6fc2-4118-be49-e5363b329be9@oracle.com>

>> Zooming out, design almost always involves "lump vs split" choices; do we highlight the specific differences between cases, or their commonality?
> Another way to express this distinction is "what level of magic is acceptable?"

Heh, that's a pretty loaded way to express it. Having semantics depend on static types is not "magic", whether or not the types are repeated at every line of code where they are used. When we say

    int x = (int) anObject

vs

    int x = (int) aLong

the two casts to int have different semantics _based on the type of what is being cast_; one will attempt to cast the Object to Integer and then unbox (possibly CCEing), and the other will attempt to narrow the long to an int (possibly losing data). And yet, they both appear to be "the same thing" -- casting a value to int. Your arguments about JEP 507 could equally well be applied to the semantic difference between the pair of statements above.
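To see the two casts side by side, a sketch (here `anObject` is assumed to hold a boxed Long):

    Object anObject = 42L;   // a boxed Long
    long aLong = (1L << 40) + 7;

    int x = (int) aLong;     // narrowing primitive conversion:
                             // compiles and runs, silently drops the high bits
    int y = (int) anObject;  // narrowing reference conversion to Integer, then unboxing:
                             // throws ClassCastException here, because the box is a Long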
So why is this not a disaster, or even "magic"? Because static types are a core part of the Java language! Appealing to them, even if they are explicitly denoted only somewhere else, is not magic.

It would be a useful thought experiment to ask yourself why the above two examples don't offend you to the point of proposing new syntax. Because all the implicit, type-driven variation in semantics that is present in `anObject instanceof int x` is equally present in `int x = anObject`. (In fact, it should be, because they are _the same thing_.)

So no, I can't agree that this is about "magic" at all. Let's use the right word: "implicit". Your core argument is that "too much is left implicit here, and therefore no one will be able to understand what is going on." These sorts of "it's OK now, but if we do one more thing it will get out of hand" arguments remind me of previous arguments around previous features that involved new implicit behaviors driven by static types, which were predicted by their detractors to be raging disasters, and which turned out to be ... fine.

Example 1: autoboxing

Prior to Java 5, there was no implicit or explicit conversion between `int` and `Integer` (not even casting); boxing and unboxing were done manually through `new Integer(n)`, `Integer.valueOf(n)`, and `Integer::intValue`. In Java 5, we added boxing and unboxing conversions to the list of conversions, and also, somewhat more controversially, supported "implicit" boxing and unboxing conversions (more precisely, allowing them in assignment/method context) as well as "explicit" boxing and unboxing conversions (casting).

Of course, some people cheered ("yay, less ceremony") but others gasped in horror. An assignment that ... can throw? What black magic is this? This will make programs less reliable! And the usual bargaining: "why does this have to be implicit, what's wrong with requiring an explicit cast?" 20 years later, this may seem comical or hard to believe, but there was plenty of controversy over this in its day.

While the residue of complexity this left in the spec was nontrivial (added to the complexity of both conversions and overload selection, each a nontrivial area of the language), overall this was a win for Java programmers. The static type system was still in charge, clearly defining the semantics of our programs, but the explicit ceremony of "go from int to Integer" receded into the background. The world didn't end; Java programs didn't become wildly less reliable. And if we asked people today if they wanted to go back, the answer would surely be a resounding "hell, no."

Example 2: local variable type inference (`var`)

The arguments on both sides of this were more dramatic; its supporters went on about "drowning in ceremony", while its detractors cried "too much! too much!", warning that Java codebases would collapse into unreadability due to bad programmers being unable to resist the temptation of implicitness. Many strawman examples were offered as evidence of how unreadable Java code would become. (To be fair, these people were legitimately afraid of how such a feature would be used, and how this would affect their experience of programming in Java, fearing it would be overused or abused, and that we wouldn't be able to reclose Pandora's box. (But some were just misguided mudslinging, of course; the silliest of them was "you're turning Java into Javascript", when in fact type inference is based entirely on ... static types.
Unfortunately there is no qualification exam for making strident arguments.))

Fortunately, some clearer arguments eventually emerged from this chaos. People pointed out that for many local variables, the _variable name_ carried far more information than the variable type, and that the requirement to manifestly type all variables led to distortions in how people coded (such as leading to more complicated and deeply nested expressions that could have benefited from pulling out subexpressions into named variables).

In the end, it was mostly a nothingburger. Developers learned to use `var` mostly responsibly, and there was no collapse in maintainability or readability of Java code. The fears were unfounded.

One of the things that happens when people react to new features that are not immediately addressing a pain point that they happen to be personally in is to focus on all the things that might go wrong. This is natural and usually healthy, but one of the problems with this tendency is that in this situation, where the motivation of the feature doesn't speak directly to us, we often don't have a realistic idea of how and when and how often it will come up in real code. In the absence of a concrete "yes, I can see 100 places I would have used this yesterday", we replace those with speculative, often distorted examples, and react to a fear of the unrealistic future they imply.

Yes, it is easy to imagine cases where something confusing could arise out of "so much implicitness" (though really, it's not so much, it's just new flavors of the same old stuff.) But I will note that almost all of the examples offered involve floating point, which mainstream Java developers _rarely use_. Which casts some doubt on whether these examples of "look how confusing this is" are realistic.

(This might seem like a topic change, but it is actually closer to the real point.) At this point you might be tempted to argue "but then why don't we 'just' exclude floating point from this feature?" And the reason is: that would go against the _whole point_ of this feature. This JEP is about _regularization_. Right now, there are all sorts of random and gratuitous restrictions about what types can be used where; we can only use reference types in instanceof, we can't switch on float, constant case switches are not really patterns yet, we can't use `null` in nested pattern context, etc., etc. Each of these restrictions or limitations may have been individually justifiable at the time, but in the aggregate, they are a pile of pure accidental complexity, make the language harder to use and learn, create unexpected interactions and gaps, and make it much much harder to evolve the language in the ways that Valhalla aims to, allowing the set of numeric types that can "work like primitives" to be expanded. We can get to a better place, but we can't bring all our accidental complexity with us.

When confronted with a new feature, especially one that is not speaking directly to pain points one is directly experiencing, the temptation is to respond with a highly localized focus, one which focuses on taking the claimed goals of this feature and trying to make it "safer" or "simpler" (which usually also means "smaller".) But such localized responses often have two big risks: they risk missing the point of the feature (which is easy if it is already not speaking directly to you), and they risk adding new complexity elsewhere in the language in aid of "fixing" what seems "too much" about the feature in front of you.
This feature is about creating level ground for future work to build on -- constant patterns, numeric conversions between `Float16` and `double`, etc. But to make these features possible, we first have to undo the accidental complexity of past hyperlocal feature design so that there can be a level ground that these features can be built on; the ad-hoc restrictions have to go. This JEP may appear to create complicated new situations (but really, just less familiar ones), but it actually makes instanceof and switch _simpler_ -- both by removing restrictions and by defining everything in terms of a small number of more fundamental concepts, rather than a larger pile of ad-hoc rules and restrictions. It's hard to see that at first, so you have to give it time to sink in.

*If* it turns out, when we get to that future, that things are still too implicit for Java developers to handle, we still have the opportunity _then_ to offer new syntactic options for finer control over conversions and partiality. But I'm not compelled by the idea of going there preemptively (and I honestly don't think it is actually going to be a problem.)

> Circling back to "what level of magic is acceptable?". The trouble > here is that partial type patterns and unconditional type patterns > already share the same syntax, and that is bad enough. To add in type > conversions is just way too far. This isn't lumping, it is magic. > > Trying to read and decipher code with merged type checks and type > conversions in patterns simply isn't possible without an excessive > amount of external context, which is potentially very difficult to do > in PRs for example. > > All my proposal really argues is that alternative syntaxes are > available that make the code readable again. With ~ the visible syntax > question becomes "if I can convert to an int ....". Other options are > available. > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From gavin.bierman at oracle.com Thu Oct 16 17:29:03 2025
From: gavin.bierman at oracle.com (Gavin Bierman)
Date: Thu, 16 Oct 2025 17:29:03 +0000
Subject: Primitive type patterns - an alternative approach (JEP 507)
In-Reply-To: References: Message-ID: <78A5FA7C-E078-4CB2-8659-8F4D071528F2@oracle.com>

Hi Stephen,

This is a nice document and discussion - thanks. To understand where JEP 507 is coming from, I think it might be useful to consider a slightly different mental model; let me call it "conversions world".

The fundamental thing we are dealing with is situations when we want a value of type A, and we have an expression of type B. In conversions world that's simple: we simply apply the "B to A conversion" to the expression. Where do we do this? In Java, in lots of places - in argument positions of method calls, in assignment expressions, in casts, in pattern matching, in numerical operations, ... To confuse things, Java has different conversions for each different use - that's a bit odd (C# is much simpler in this respect, for example) - but that's the world we have inherited.

So, just to make it concrete:

    T t = e;

We know that e has static type S. So we figure out a conversion from S to T - let's call it C - and then the compiler bakes it in:

    T t = C[e];

(I'm going to use square brackets to mean "I have applied the conversion C to the expression e". Note also that if we couldn't figure out a conversion from S to T, it is a compile-time error!)

In conversions world, we do this ALL THE TIME, EVERYWHERE.
For example:

    String s = ...;
    Object o = s;      --->   Object o = String-to-Object[s]

(What is the String-to-Object conversion? Operationally it's the identity function! We can obviously optimise...)

But it also works here too:

    int i = ...;
    ...(byte) i...     -->    ...int-to-byte[i]

We have lots of these conversion functions in Java, just look at Chapter 5! As you rightly observe, *today* they have the following property:

- The conversions between reference types are essentially functions that are less than the identity, i.e. they either return the object they have been given or they throw.
- The conversions between primitive types are quite different in that many of them actually change the representation; e.g. they take an 8 bit value and return a 32 bit one.

What you are suggesting, I believe, is to cast this difference in stone *and* make it concrete in syntax. Unfortunately, I think that is a very serious restriction. We may in the future want to define conversions between reference types that *do* change the representation, e.g. think of a conversion from one value class to another (that is not related by subclassing). We may want to define that conversion using type classes.

So this future world is a world of generalized conversions. Users can write conversions perhaps. Maybe we can even get rid of these magical Foo-to-Bar[-] conversions, and write them in a type class somewhere. But it is just a slight generalization of the conversions world we have today. Conversions are, both today and in the future, always defined with respect to their static types. When you write (Foo)e, you need to know the static type of e to figure out which conversion to type Foo the compiler will insert. Undoubtedly, Tagir will give us a wonderful IDE experience so you can figure out the conversion :-) If it's one you've written, I'm sure the declaration will be a click away. But the mental model is that it is just a lump of code that converts a value from one type to another. Everything is a conversion.

Hope that helps.
Gavin

PS: There is extensive academic work in this area. "Subtyping as coercions" is a formal model where a type A is a subtype of type B if there's an implicit coercion function c from A to B. (That's actually the way the JLS views things, if you squint.) I think I learnt this from a bunch of papers by Zhaohui Luo from the mid-1990s. This approach scales to all sorts of crazy powerful type theories, and provides a powerful framework in which all sorts of important program rewriting scenarios can be recast as type-directed coercion insertion. Recommended reading if you have the time.

> On 15 Oct 2025, at 07:34, Stephen Colebourne wrote: > > In the vein of JEP feedback, I believe it makes sense to support > primitive types in pattern matching, and will make sense to support > value types in the future. And I can see the great work that has been > done so far to enable this. > > Unfortunately, I hate the proposed syntactic approach in JEP 507. It > wasn't really clear to me as to *why* I hated the syntax until I had > enough time to really think through what Java does in the area of > primitive type casts, and why extending that as-is to pattern matching > would IMO be a huge mistake. > > (Please note that I fully grasp the pedagogical approach wrt > instanceof defending an unsafe cast, but no matter how much it is > repeated, I don't buy it, and I don't believe it is good enough by > itself.)
> > To capture my thoughts, I've written up how Java's current approach to > casts leads me to an alternative proposal - type conversion casts, and > type conversion patterns: > https://tinyurl.com/typeconvertjava1 > > thanks > Stephen

From alber84ou at gmail.com Thu Oct 16 18:15:34 2025
From: alber84ou at gmail.com (Alberto Otero Rodríguez)
Date: Thu, 16 Oct 2025 20:15:34 +0200
Subject: New methods in java.util.Map
Message-ID:

Hi, I have a suggestion for new methods in java.util.Map. As of Java 25, we have the method:

    default V getOrDefault(Object key, V defaultValue)

However, I think a method where the defaultValue is supplied by a lambda function could be interesting, like this one:

    default V getOrDefault(Object key, Function defaultValueFunction)

Also, two other new methods might be interesting if you want to get a value from a map, but if the key doesn't exist you want to insert that value in the map and return it:

    default V getOrPut(Object key, V defaultValue)

    default V getOrPut(Object key, Function defaultValueFunction)

What do you think?

Regards,
Alberto.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From chris at upliftinglemma.net Thu Oct 16 19:14:46 2025
From: chris at upliftinglemma.net (Chris Bouchard)
Date: Thu, 16 Oct 2025 15:14:46 -0400
Subject: New methods in java.util.Map
In-Reply-To: References: Message-ID:

Alberto,

I believe your getOrPut methods already exist as putIfAbsent and computeIfAbsent, unless I'm missing a subtle difference.

On Thu, Oct 16, 2025, 14:20 Alberto Otero Rodríguez wrote: > Also, two other new methods might be interesting if you want to get a > value from a map, but if the key doesn't exist you want to insert that > value in the map and return it: > > default V getOrPut(Object key, V defaultValue) > > default V getOrPut(Object key, Function > defaultValueFunction) >

Chris
> -------------- next part -------------- An HTML attachment was scrubbed... URL:

From scolebourne at joda.org Thu Oct 16 21:18:58 2025
From: scolebourne at joda.org (Stephen Colebourne)
Date: Thu, 16 Oct 2025 22:18:58 +0100
Subject: Primitive type patterns - an alternative approach (JEP 507)
In-Reply-To: <78A5FA7C-E078-4CB2-8659-8F4D071528F2@oracle.com> References: <78A5FA7C-E078-4CB2-8659-8F4D071528F2@oracle.com>
Message-ID:

On Thu, 16 Oct 2025 at 18:29, Gavin Bierman wrote: > What you are suggesting, I believe, is to cast this difference in stone *and* make it concrete in syntax. > Unfortunately, I think that is a very serious restriction. We may in the future want to define conversions between reference types that *do* change the representation, e.g. think of a conversion from one value class to another (that is not related by subclassing).

The document covers and welcomes the idea that there are type conversions between value types (the new syntax isn't about primitive types, it is about type conversion). This is driven by the observation that type conversions are significantly more complex things than type checks, and that distinction is worthy of being highlighted.
thanks for the comments
Stephen

From scolebourne at joda.org Thu Oct 16 22:11:31 2025
From: scolebourne at joda.org (Stephen Colebourne)
Date: Thu, 16 Oct 2025 23:11:31 +0100
Subject: Primitive type patterns - an alternative approach (JEP 507)
In-Reply-To: <805915f9-6fc2-4118-be49-e5363b329be9@oracle.com> References: <805915f9-6fc2-4118-be49-e5363b329be9@oracle.com>
Message-ID:

On Thu, 16 Oct 2025 at 17:22, Brian Goetz wrote: > Because all the implicit, type-driven variation in semantics that is present in `anObject instanceof int x` is equally present in `int x = anObject`. (In fact, it should be, because they are _the same thing_.)

Using one syntax for all kinds of cast is manageable, because the control flow is *unconditional*. Using one syntax in patterns for unconditional type checks, partial type checks and partial type conversions is atrocious, because the control flow is *conditional*. With conditional control flow developers absolutely 100% need to be able to read code and know which branch will be taken. But the currently chosen syntax deliberately sets out to obscure the information needed to read the code.

I understand how you want one syntax to be the inverse of the other. But as per the previous two paragraphs, they operate in different problem spaces, and different syntax is entirely justified.

> This feature is about creating level ground for future work to build on -- constant patterns, numeric conversions between `Float16` and `double`, etc. But to make these features possible, we first have to undo the accidental complexity of past hyperlocal feature design so that there can be a level ground that these features can be built on; the ad-hoc restrictions have to go.

And I agree with all of that. As above, the problem is that the level ground being created (in patterns) is just too flat.

> Example 2: local variable type inference (`var`)
> In the end, it was mostly a nothingburger. Developers learned to use `var` mostly responsibly, and there was no collapse in maintainability or readability of Java code. The fears were unfounded.

var is still hugely controversial. And many developers and shops refuse to use it, with fear-based responses.

> But I will note that almost all of the examples offered involve floating point, which mainstream Java developers _rarely use_.

I don't use any floating point examples, precisely because they are a red herring obscuring the real issues.

> *If* it turns out, when we get to that future, that things are still too implicit for Java developers to handle, we still have the opportunity _then_ to offer new syntactic options for finer control over conversions and partiality.

Surely a better option would be to put this feature on the back burner until there are value types and type classes that actually justify it. I strongly suspect that approach would also yield much better feedback on whether the outcome is readable or not. And I think the downsides of delaying are minimal.

Thanks as always for your thoughts,
Stephen

From ethan at mccue.dev Fri Oct 17 03:05:35 2025
From: ethan at mccue.dev (Ethan McCue)
Date: Thu, 16 Oct 2025 23:05:35 -0400
Subject: New methods in java.util.Map
In-Reply-To: References: Message-ID:

I don't have anything intelligent to add, but I assume either

* It's a good / okay idea, not high priority
* putIfAbsent and computeIfAbsent are seen to be enough for lazy operations
* Something much more subtle and/or annoying about how it will affect the universe of map implementations.
Either way, just so there is something concrete to talk about:

import module java.base;

public final class Maps {
    private Maps() {
    }

    // Sentinel: distinguishes "key absent" from "key mapped to null".
    private static final Object DEFAULT = new Object();

    @SuppressWarnings({"unchecked", "rawtypes"})
    public static <K, V> V getOrCompute(
            Map<K, V> m,
            Object key,
            Function<K, V> compute
    ) {
        // The raw cast lets the sentinel be passed where a V is expected.
        Object value = ((Map) m).getOrDefault(key, DEFAULT);
        if (value == DEFAULT) {
            return compute.apply((K) key);
        }
        else {
            return (V) value;
        }
    }
}

(Note the three needed casts - this might be difficult for the same
reason Map#get takes an Object and not a K)
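And to be explicit about the intended semantics (the map and values here
are just for illustration): unlike computeIfAbsent, this never writes the
computed value back into the map:

Map<String, Integer> counts = new HashMap<>();
int n = Maps.getOrCompute(counts, "apples", k -> 0);
// n == 0, but counts still contains no mapping for "apples"

On Thu, Oct 16, 2025 at 7:20 PM Chris Bouchard wrote:

> Alberto,
>
> I believe your getOrPut methods already exist as putIfAbsent and
> computeIfAbsent, unless I'm missing a subtle difference.
>
> On Thu, Oct 16, 2025, 14:20 Alberto Otero Rodríguez
> wrote:
>
>> Also, two other new methods might be interesting if you want to get a
>> value from a map, but if the key doesn't exist you want to insert that
>> value in the map and return it:
>>
>> default V getOrPut(Object key, V defaultValue)
>>
>> default V getOrPut(Object key, Function<Object, V>
>> defaultValueFunction)
>>
>
> Chris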
URL: From mark.reinhold at oracle.com Mon Oct 27 16:18:30 2025 From: mark.reinhold at oracle.com (Mark Reinhold) Date: Mon, 27 Oct 2025 16:18:30 +0000 Subject: New candidate JEP: 530: Primitive Types in Patterns, instanceof, and switch (Fourth Preview) Message-ID: <20251027161829.029E9D00@naskeag.niobe.net> https://openjdk.org/jeps/530 Summary: Enhance pattern matching by allowing primitive types in all pattern contexts, and extend instanceof and switch to work with all primitive types. This is a preview language feature. - Mark From rob.ross at gmail.com Fri Oct 31 02:37:59 2025 From: rob.ross at gmail.com (Rob Ross) Date: Thu, 30 Oct 2025 19:37:59 -0700 Subject: Ad hoc type restriction In-Reply-To: <_hpWIJCY4KKm-omhziKAtP-HzavBj0EWmR2PVZa236HpRJNKeovu8Ni1PtaVMTVk1-Tawmxtiz7gGQx_b70MmIXJ0KCEBRMwtdWnxx8Y6g8=@proton.me> References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com> <36edf5e8-791e-4d31-8e7d-05cea3eabd22@oracle.com> <_hpWIJCY4KKm-omhziKAtP-HzavBj0EWmR2PVZa236HpRJNKeovu8Ni1PtaVMTVk1-Tawmxtiz7gGQx_b70MmIXJ0KCEBRMwtdWnxx8Y6g8=@proton.me> Message-ID: I scanned this thread but haven't yet seen any references to "Programming by Contract", so I thought I'd just mention and remind everyone about it. But first, please don't make the "totally awesome" an enemy of the "really really useful right now." A Null/Not-null operator would be very helpful all by itself and I don't think that feature needs to wait for the Best Parameter Checking System Ever. My contribution example: //With apologies to Demeter. //API: FooType foo = obj.nested1().nested2().fooProvider().getFoo(); //CURRENTLY FooType foo = null; if (obj != null && obj.nested1() != null && obj.nested1().nested2() != null && obj.nested1().nested2().fooProvider() != null) { foo = obj.nested1().nested2().fooProvider().getFoo(); } //WANTED FooType foo = obj?.nested1()?.nested2()?.fooProvider()?.getFoo(); A not-null assertion is also nice to have. ... // The underscore is here only to make the exclamation point more clear in the variable name result = obj_!.getThing(); Ok, now on to Program/Design by contract. This is a very useful technique. Whenever I've used it, it has always been in an ad-hoc fashion. But the "Checkers" being discussed above really seemed similar in my mind to this concept. Being able to specify your pre-conditions, post-conditions, and invariants in one place makes it easy to: 1. Discover the rules easily 2. Communicate clear intent to caller 3. Make it easier to reason about state in the called function 4. Makes the code easier to debug and maintain How to implement them? Well first I feel like these Conditions could apply both to a specific argument or return value, and/or apply to the function as a whole. I.e., FooType doSomething( T1 t1, T2 t2) {...} And perhaps t1 is an Enum and t2 can only take on certain values based on the value of that Enum. So individual parameters could have a Contract and/or the function could have a Contract that would be some type of implementation of a Mediator pattern. That's the high level idea. As has been discussed, Annotations are just post-it notes on Symbols that no one is required to read. So if they are not available, some other mechanism would need to be used. Now, for my own method parameter verification, I used to check and throw RuntimeExceptions, as I viewed these as programming errors that should all be fixed in production code, obviating the need for the check and the exceptions in production. 
For code in the critical path, I would eventually remove these checks
so runtime performance did not take the hit. By the time I did this, I
had robust unit tests that would ensure these kinds of errors could be
caught at testing time. However, tests can't catch new bugs that have
no tests for them yet. And developers new to the code (which includes
me about 4 months after I have written it) could still introduce bugs
by passing in unexpected data, necessitating the check-and-exception
code to be reintroduced to the buggy methods.

I have since discovered the power of assertions, via the simple
`assert` keyword. Now I have the best of both worlds. I never have to
delete my argument checks, so they are always working to enforce the
contract. I can enable them in development and disable them in
production so they aren't a performance hit.

I wonder if this new Contract feature might do something similar? Maybe
even, dare I say it, allow injecting asserts into actual byte code!!!
(I know, I know, the Scandal!!) Then the developer has the final say on
whether those asserts are acted upon and in which environments that's
appropriate. If not by asserts, then perhaps by some new mechanism.

What I have described so far are really simple checks on the domain,
not-nullness, etc., of method arguments. It has assumed that you can
completely enumerate the valid and invalid ranges of arguments at
compile time. That will not always be possible, especially for complex
invariants that depend on the runtime state. So I again see another
dimension for these Contracts: `runtime` and `compile time`.

Compile-time Contracts could likely be fully implemented with the
existing assert mechanism. And they could be disabled in production
environments, so there would be essentially no performance cost.
Runtime Contracts would still require checking at runtime, and could
introduce a performance penalty, but I'm not sure this would be any
greater than what we are already doing in robust, fault-tolerant
production code. And maybe Contracts are just lambdas under the hood
that are passed the method arguments before the method is invoked, with
the developer completely on their own in implementing the code,
deciding when to use an assert and when to use check-throw.

And for my final trick: I think Contract programming would be very
useful, and so does everyone who has replied to this thread. Perhaps
useful enough to justify a new type of processor to be added to Java?
One that could affect bytecode at compile time? The Contract Processor
perhaps? With Annotations, Java got the @ symbol. I hear that © is
available and will work for free!

public void dial(©PhoneNumber String number) {...}

- Rob

P.S. The © was just for dramatic effect. In a monospaced code font it
looks terrible and would be hard to distinguish from @. But maybe there
are designers that could help with that, perhaps choosing an
easy-to-type special symbol in the BMP that looks like the copyright
sign but is more legible.

On Thu, Oct 16, 2025 at 1:41 AM Aaryn Tonita wrote:

> I somewhat feel like these sidecar static type analysers fail to catch on
> in Java because annotations are too flexible and thus poorly suited for
> carrying type information.
> In comparison with Python, where there are many tools but only a single
> type annotation, in Java there are many tools with multiple overlapping
> type annotations that can each be redundantly applied, while the single
> language-level type ascribed doesn't support restrictions without OOP
> ceremony. If there was a dedicated, unique additional type annotation,
> maybe tools would attempt to be more interoperable with one another,
> but also maybe not.
>
> Today we have tools like Checker competing with JSpecify and the tools
> that came before it, and JSpecify even pointing at the sad state of
> affairs in a Stack Overflow question (on the level of null restriction)
> where there are many competing and poorly interoperating tools. When
> the interoperability story is complex, the choice is hard to make, and
> living with the lack of restrictions seems ok (alternately one can
> choose between simply guarding like usual or going with the OOP
> ceremony).
>
> On Thursday, October 16th, 2025 at 2:12 AM, Archie Cobbs <
> archie.cobbs at gmail.com> wrote:
>
> On Wed, Oct 15, 2025 at 11:34 AM Manu Sridharan wrote:
>
>> This is all mostly possible via the Checker Framework and similar
>> approaches.
>>
>
> In the spirit of due diligence, I am attempting to implement something
> like "WHAT I WANT" using the Checker Framework. Currently I'm battling
> a poor progress/confusion ratio.
>
>> You wouldn't need @SuppressWarnings annotations for validation either,
>> due to type refinement. And, for this type of property, where you're
>> essentially trying to introduce new subtypes of an existing type and
>> then enforce type compatibility at assignments, the implementation
>> effort to write the checker is pretty low.
>>
>
> Let me know offline if you (or anyone else) are interested in helping
> me prototype something (I have a primordial GitHub project).
>
> Thanks,
> -Archie
> --
> Archie L. Cobbs

From archie.cobbs at gmail.com  Fri Oct 31 15:10:44 2025
From: archie.cobbs at gmail.com (Archie Cobbs)
Date: Fri, 31 Oct 2025 10:10:44 -0500
Subject: Ad hoc type restriction
In-Reply-To:
References: <651bcb1a-3dba-48c8-b615-740034760d88@oracle.com>
 <36edf5e8-791e-4d31-8e7d-05cea3eabd22@oracle.com>
 <_hpWIJCY4KKm-omhziKAtP-HzavBj0EWmR2PVZa236HpRJNKeovu8Ni1PtaVMTVk1-Tawmxtiz7gGQx_b70MmIXJ0KCEBRMwtdWnxx8Y6g8=@proton.me>
Message-ID:

On Thu, Oct 30, 2025 at 9:38 PM Rob Ross wrote:

> Contracts, `runtime` and `compile time`...
>

All good thoughts.

1. Strongly-typed languages like Java are favored for complex
programming tasks because of the rich tools they provide to
programmers, allowing one to prove to oneself that the code's behavior
will be correct. This is Java in a nutshell!

2. In general, such tools will operate at *both* compile/build time and
runtime - and they should do so in a coherent way.

The observation I'm trying to make in this thread is: Java gives its
own tools the ability to operate seamlessly across both domains
(compile-time and runtime), but it makes it difficult for the
programmer to add their own tools that operate seamlessly across both
domains. Heck, even the compile-only tools are difficult - for evidence
of this, count how many import com.sun.tools.javac... statements there
are in the Checker Framework...

For example, in Java I can say:

Foo x = (Foo)y;

This will be checked at both compile time (could a value of the
compile-time type of y possibly be a Foo?) and at runtime (CHECKCAST).
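To make that concrete (flipCoin() is a made-up stand-in for anything the
compiler can't see through):

Object y = flipCoin() ? new Foo() : "definitely not a Foo";
Foo x = (Foo) y;    // compiles, because the static type Object *could* be a Foo;
                    // at runtime, CHECKCAST throws ClassCastException on the String branch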
Moreover, it's necessary that it be checked at both compile-time and at
runtime, because at compile-time we have some, but not all, of the
relevant information. Since we want errors to be detected as soon as
possible, we detect what we can at compile time and leave the rest to
runtime. These two checks are really the same check, just spread across
two different domains - they are "coherent".

In an ideal world, I should be able to define my own custom type
restriction, with both compile-time and runtime components, and have it
checked coherently at both compile-time and runtime.

An experiment I'm working on (very slowly) is to determine how hard it
is to create such a tool using what we have today. The least clunky
plan I've come up with so far is:

1. Create a new meta-annotation (e.g., @TypeTag) that you would use
like this:

/**
 * Annotates declarations of type {@link String} for which
 * the value must be non-null and a valid E.164 phone number.
 */
@Target(ElementType.TYPE_USE)
@TypeTag(appliesTo = String.class, validatedBy = PhoneNumberChecker.class)
public @interface PhoneNumber {
}

2. Create a Checker Framework plugin extending the Subtyping checker
that enforces the user-defined type restrictions at compile time (e.g.,
@PhoneNumber String x)

3. Create a bytecode analyzer that looks for CAST
RuntimeInvisibleTypeAnnotations and inserts corresponding runtime
checks (e.g., PhoneNumberChecker.check(x))

4. Require both of the above to be included in your build (maybe they
could be wrapped into a single Maven plugin?)
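For step 3, the inserted runtime check might boil down to a static call
on the validatedBy class - something like this (only a sketch; the regex
and the choice of exception are placeholders):

public final class PhoneNumberChecker {

    private PhoneNumberChecker() {
    }

    // Invoked from the bytecode inserted at each cast to @PhoneNumber String
    public static String check(String value) {
        if (value == null || !value.matches("\\+[1-9][0-9]{1,14}"))
            throw new ClassCastException("not a valid E.164 phone number: " + value);
        return value;
    }
}

-Archie

--
Archie L. Cobbs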