From brian.goetz at oracle.com Mon Sep 1 20:04:15 2025
From: brian.goetz at oracle.com (Brian Goetz)
Date: Mon, 1 Sep 2025 20:04:15 +0000
Subject: Operator overloading for collections?
In-Reply-To: References: Message-ID:

This is an understandable question, and one that occurs almost immediately when someone says "operator overloading." And, there's a very clear answer: this is a hard "no". Not now, not ever.

Operators like `+` are not just "methods with funny names and more complex priority and associativity rules". The reason that Java even has operators like + in the first place is that _numbers are special_, and that we want code that implements mathematical calculations to look somewhat like the math it is calculating. Numbers are so special, and so important, that a significant portion of foundational language design choices -- "what kinds of built-in types do we have", "what built-in operators do we support", and "what conversions between built-in types do we support" -- are all driven by numerical use cases. In earlier programming languages, such as Fortran, the language design process largely stopped here: most of the language specification was about numbers and numeric operators.

Similarly, the only reason we are _even considering_ operator overloading now is for the same reason: to support _numerical_ calculations on the new numeric types enabled by Valhalla. (And even this is a potentially dangerous tradeoff.)

It is a core Java value that "reading code is more important than writing code"; all of this complexity was taken on so that reading _numerical_ code could look like the math we already understand. One doesn't have to consult the Javadoc to know what `a+b` means when a and b are ints. We know this instinctively, because we have been looking at integer arithmetic using infix operators since we were children. But once you leave this very narrow realm, this often becomes something that benefits writing code more than reading code.
And for every "but this seems perfectly reasonable" example, there are many more "please, kill me now" examples.

For sure, the restrictions we set will be constraining, and they will eliminate some things that seem entirely reasonable. And surely bad programmers will move heaven and earth to "outwit" the "stupid" compiler (as they always have.) But the reason we are even doing this is: numbers are special.

I realize this is not the answer you were hoping for.

On Aug 21, 2025, at 7:01 PM, david Grajales wrote:

Dear Amber team,

I hope this message finds you well.

I recently watched Brian's talk "Growing the Java Language" and I found the discussion on operator overloading particularly interesting. I appreciated how the proposed approach allows operator overloading in a more restricted and safer way, by enforcing algebraic laws.

This immediately made me think about potential use cases for collections -- for example, using operators for union (+), difference (-), exclusion, and similar operations. However, since this feature is intended to be limited to value classes, and the current collection classes are not value-based, it seems they would not benefit from these new operator overloading capabilities.

My question is: are there any plans to enhance the collections framework with value-class-based variants, so that they could take advantage of this feature? Or is this idea (or any other related to the use case) not currently under consideration?

I know this is still under discussion; I am just curious about this particular use case.

Thank you very much for your work and for your time.

Best regards, and always yours.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From redio.development at gmail.com Mon Sep 1 21:31:49 2025
From: redio.development at gmail.com (Red IO)
Date: Mon, 1 Sep 2025 23:31:49 +0200
Subject: Operator overloading for collections?
In-Reply-To: References: Message-ID:

I know this is a strong topic for you, and I saw many discussions about it, especially with you defending the current state. I'm not writing this to try to change your mind. I know your take on this is clear. But for the sake of the open discussion I'm writing this in defense of the great (not new) proposal.

But enough with the foreword. I think the "numbers are special" argument is pretty weak and falls apart as soon as you look at it. Saying numbers and their operators belong to math and are easy to grasp because of familiarity, while saying we shouldn't have operators for lists, sets (and tuples), is denying that both are equally as much a part of basic math as numerals themselves. You can even define numbers by an arrangement of sets. The problem is that the mathematical operations for container structures are operators like union and intersection -- operators that aren't found as keys on our keyboards. We won't change what's on keyboards, that's clear. The approach used in Java and other languages for operations not usually done in math, like += or ++, which deviate from the math background of operators, is to combine characters to form new operators. That has become a common building block for many syntax constructs of modern languages. This approach, which is also used in Java, directly contradicts the math-origin argument for operators. We already have non-math operators in Java, not even mentioning the infamous string concat operator or the exception union operator. Saying those are different from, for example, `list += element` is pure denial. Sure, you can say those were mistakes, but I don't see anyone arguing to not use these "mistakes" every day and instead do the proper conversion and concatenation with methods.
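To make this concrete, here is a sketch of what the basic container operations look like with methods today; the operator forms in the comments are purely hypothetical syntax, not real or proposed Java:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CollectionOps {
    // Today: bulk-operation methods on a copy.
    // Hypothetical operator form (not real Java): a + b
    static <T> Set<T> union(Set<T> a, Set<T> b) {
        Set<T> result = new HashSet<>(a);
        result.addAll(b);
        return result;
    }

    // Hypothetical operator form (not real Java): a - b
    static <T> Set<T> difference(Set<T> a, Set<T> b) {
        Set<T> result = new HashSet<>(a);
        result.removeAll(b);
        return result;
    }

    // Intersection has no obvious ASCII operator at all.
    static <T> Set<T> intersection(Set<T> a, Set<T> b) {
        Set<T> result = new HashSet<>(a);
        result.retainAll(b);
        return result;
    }

    public static void main(String[] args) {
        Set<Integer> a = new HashSet<>(List.of(1, 2, 3));
        Set<Integer> b = new HashSet<>(List.of(3, 4));
        System.out.println(union(a, b));
        System.out.println(difference(a, b));
        System.out.println(intersection(a, b));
    }
}
```

The method names carry the semantics explicitly, which is exactly the trade-off under discussion: the operator forms would be shorter, but the reader would have to know what + means for this particular type.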
Also, the argument that new operators would create situations where it's unclear what they do completely falls apart on the string concat operator, which is arguably a worse offender in this regard than many potential operators for container types. You have a magic toString call on types that could have multiple string representations, while being messy in conjunction with order of operations with addition. Another common argument why the string concat operator is different is that it was needed because string concatenation is so common, and it would be too messy and less readable to write it all in methods. Guess what: that's the same thing for container types. Having operators for basic operations on containers would improve code readability drastically. In some cases we can clearly reuse existing operators without any chance of confusion. In other cases new operators could be assembled from symbols. Operators for collection creation especially would significantly help improve readability, instead of making things harder to read like you suggested. It's not the goal to come up with an operator for each and every basic method of Java classes, nor is it the goal to allow custom overloads of user types. Both have many good arguments against them. But it's really time to lift some basic operations that are currently buried in OOP syntax hell (like new ArrayList() vs. []) to proper language features with their own operators. Try-with-resources is an example where it was done, and it was a success. It's essentially just dedicated syntax for a close() call in an implicit finally block, while not being a familiar math syntax. In my opinion collection operators are equally if not more valid to exist than try-with-resources.

In great regards
RedIODev

On Mon, Sep 1, 2025, 22:04 Brian Goetz wrote:

> This is an understandable question, and one that occurs almost immediately when someone says "operator overloading."
>
> And, there's a very clear answer: this is a hard "no". Not now, not ever.
>
> [...]
>
> But the reason we are even doing this is: numbers are special.
>
> I realize this is not the answer you were hoping for.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brian.goetz at oracle.com Mon Sep 1 21:43:18 2025
From: brian.goetz at oracle.com (Brian Goetz)
Date: Mon, 1 Sep 2025 21:43:18 +0000
Subject: Operator overloading for collections?
In-Reply-To: References: Message-ID: <8614E4DE-9D5B-4B5C-8224-ACD003A71D10@oracle.com>

> I'm not writing this to try to change your mind. I know your take on this is clear. But for the sake of the open discussion I'm writing this in defense of the great (not new) proposal.
First, let me point out that we have now strayed WAY outside of the charter of amber-dev, from "curious question about will X ever happen" (which is in the "tolerably off topic" category) to "let me lobby for a massive change in language evolution approach."

> But enough with the foreword. I think the numbers are special argument is pretty weak and falls apart as soon as you look at it.

Then perhaps it is a mistake to even consider operator overloading at all. If this is how enough people feel -- that supporting numbers only is so bad that it is worse than doing nothing -- then we have to seriously consider doing nothing.

From brian.goetz at oracle.com Mon Sep 1 21:56:38 2025
From: brian.goetz at oracle.com (Brian Goetz)
Date: Mon, 1 Sep 2025 21:56:38 +0000
Subject: Operator overloading for collections?
In-Reply-To: <8614E4DE-9D5B-4B5C-8224-ACD003A71D10@oracle.com>
References: <8614E4DE-9D5B-4B5C-8224-ACD003A71D10@oracle.com>
Message-ID:

OK, my bad here. I read this as "I am _now_ writing this to try to change your mind" (I can only imagine my brain saw the `w` on the front of "writing" and did a branch mispredict), rather than what you actually wrote. I will go and reread what you wrote with this in mind...

> On Sep 1, 2025, at 5:43 PM, Brian Goetz wrote:
>
>> I'm not writing this to try to change your mind. I know your take on this is clear. But for the sake of the open discussion I'm writing this in defense of the great (not new) proposal.
>
> First, let me point out that we have now strayed WAY outside of the charter of amber-dev, from "curious question about will X ever happen" (which is in the "tolerably off topic" category) to "let me lobby for a massive change in language evolution approach."
>
>> But enough with the foreword. I think the numbers are special argument is pretty weak and falls apart as soon as you look at it.
>
> Then perhaps it is a mistake to even consider operator overloading at all. If this is how enough people feel --
that supporting numbers only is so bad that it is worse than doing nothing -- then we have to seriously consider doing nothing.

From david.1993grajales at gmail.com Mon Sep 1 21:58:46 2025
From: david.1993grajales at gmail.com (david Grajales)
Date: Mon, 1 Sep 2025 16:58:46 -0500
Subject: Operator overloading for collections?
In-Reply-To: References: Message-ID:

Thanks for the answer Brian. It's an understandable position.

My best regards to all the Java development team.

El lun, 1 sept 2025 a la(s) 3:04 p.m., Brian Goetz (brian.goetz at oracle.com) escribió:

> This is an understandable question, and one that occurs almost immediately when someone says "operator overloading."
>
> And, there's a very clear answer: this is a hard "no". Not now, not ever.
>
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From redio.development at gmail.com Mon Sep 1 22:07:37 2025
From: redio.development at gmail.com (Red IO)
Date: Tue, 2 Sep 2025 00:07:37 +0200
Subject: Operator overloading for collections?
In-Reply-To: <8614E4DE-9D5B-4B5C-8224-ACD003A71D10@oracle.com>
References: <8614E4DE-9D5B-4B5C-8224-ACD003A71D10@oracle.com>
Message-ID:

That's completely missing the argument and ignoring 100% of what I said. Nobody wants an overloading apocalypse like in C++. People simply argue that some parts of the language are equally deserving of operators, like numbers are -- for example, containers. Nobody is arguing that numbers don't deserve operators, but other features do as well. I can continue naming things in Java that have operators that aren't numbers. Just to name another example: the new operator. It could well be two methods, malloc and init, but it has its own operator because it makes things clearer. Defending the status quo with zero arguments behind it, while seemingly not reading either the original question or the comment you are answering, is nothing more than destructive and should not be the climate of a discussion mailing list.

On Mon, Sep 1, 2025, 23:43 Brian Goetz wrote:

> > I'm not writing this to try to change your mind. I know your take on this is clear. But for the sake of the open discussion I'm writing this in defense of the great (not new) proposal.
>
> First, let me point out that we have now strayed WAY outside of the charter of amber-dev, from "curious question about will X ever happen" (which is in the "tolerably off topic"
category) to "let me lobby for a massive change in language evolution approach."
>
> > But enough with the foreword. I think the numbers are special argument is pretty weak and falls apart as soon as you look at it.
>
> Then perhaps it is a mistake to even consider operator overloading at all. If this is how enough people feel -- that supporting numbers only is so bad that it is worse than doing nothing -- then we have to seriously consider doing nothing.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brian.goetz at oracle.com Mon Sep 1 22:12:59 2025
From: brian.goetz at oracle.com (Brian Goetz)
Date: Mon, 1 Sep 2025 22:12:59 +0000
Subject: Operator overloading for collections?
In-Reply-To: References: Message-ID: <17882354-E976-4861-A683-3FD033AB4040@oracle.com>

To shed some additional light on this, the argument here is not _only_ philosophical. There is also a complexity argument and a semantic argument against.

Complexity. Supporting operators for things like `+` on collections is probably an order of magnitude more complexity than the proposal I sketched at JVMLS, because this would also require the ability to declare _new signatures_ for operators. The JLS currently defines the set of operators, their precedence, and their associativity, as well as their applicability. We treat + as a (T, T) -> T function, where T is one of the seven primitive types other than boolean. Adding _more_ types to that set is a far simpler thing than also saying that the operator can have an arbitrary function signature, such as `(C, T) -> C` (as would be needed for this example). Now we need syntax and type system rules for how you would declare such operators, plus more complex rules for resolving conflicts. And I suspect it wouldn't be long before someone started saying "and we'll need to control the precedence and associativity to make it make sense" too. This is a MUCH bigger feature, and it's already big. For a goal that seems marginal, at best.

Semantics.
In the current model, we can tie the semantics of + to an algebraic structure like a semigroup, which can even express constraints across operators (such as the distributive rule). This means that users can understand the semantics (at least in part) of expressions like `a+b` without regard for the type. But if we merely treated operators as "methods with funny names" and allowed them to be arbitrarily defined, including with asymmetric argument types, different return types, etc., then no one knows what + means without looking it up. Seems a pretty bad trade: all the semantics for a little extra expressiveness.

> On Sep 1, 2025, at 5:58 PM, david Grajales wrote:
>
> Thanks for the answer Brian. It's an understandable position.
>
> My best regards to all the Java development team.
>
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david.1993grajales at gmail.com Tue Sep 2 01:45:52 2025
From: david.1993grajales at gmail.com (david Grajales)
Date: Mon, 1 Sep 2025 20:45:52 -0500
Subject: Fwd: Feedback about StableValues (Preview)
In-Reply-To: References: Message-ID:

---------- Forwarded message ---------
De: david Grajales
Date: lun, 1 sept 2025 a la(s) 8:43 p.m.
Subject: Feedback about StableValues (Preview)
To:

Subject: Feedback and Questions on JEP 8359894 - Stable Values API

Dear Java core-libs development team,

Please accept my sincere gratitude and compliments for your ongoing dedication to improving the Java platform. The continuous innovation and thoughtful evolution of Java is truly appreciated by the developer community.

I have been experimenting with the Stable Values API (JEP 8359894) in a development branch of a service at my company, and I would like to share some observations and seek your guidance on a particular use case.

Current Implementation

Currently, I have a logging utility that follows a standard pattern for lazy value computation:

    class DbLogUtility {
        private static final ConcurrentMap<String, Logger> loggerCache = new ConcurrentHashMap<>();

        private DbLogUtility() {}

        private static Logger getLogger() {
            var className = Thread.currentThread().getStackTrace()[3].getClassName();
            return loggerCache.computeIfAbsent(className, LoggerFactory::getLogger);
        }

        public static void logError() {
            // .... implementation detail
        }
    }

Challenge with Stable Values API

When attempting to migrate this code to use the Stable Values API, I encountered a fundamental limitation: the API requires keys to be known at compile time. The current factory methods (StableValue.function(Set, Function) and StableValue.intFunction(int, IntFunction)) expect predefined key sets or bounded integer ranges. This design constraint makes it challenging to handle dynamic key discovery scenarios, which are quite common in enterprise applications for:

- Logger caching by dynamically discovered class names
- Configuration caching by runtime-determined keys
- Resource pooling with dynamic identifiers
- Etc.

Questions and Feedback

1. *Am I missing an intended usage pattern?* Is there a recommended approach within the current API design for handling dynamic key discovery while maintaining the performance benefits of stable values?

2. Would you consider any of these potential enhancements:
   - Integration of stable value optimizations directly into existing collection APIs (similar to how some methods have been added to the List and Map interfaces for better discoverability)
   - A hybrid approach that provides stable value benefits for dynamically discovered keys

3. Do you envision the Stable Values API as primarily serving compile-time-known scenarios, with dynamic use cases continuing to rely on traditional concurrent collections?

Thank you for your time and consideration. I would be grateful for any guidance or clarification you might provide on these questions. If there are planned enhancements or alternative patterns I should consider, I would very much appreciate your insights.

Best regards, and always yours.
David Grajales Cárdenas.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From atonita at proton.me Tue Sep 2 06:27:04 2025
From: atonita at proton.me (Aaryn Tonita)
Date: Tue, 02 Sep 2025 06:27:04 +0000
Subject: Operator overloading for collections?
In-Reply-To: <17882354-E976-4861-A683-3FD033AB4040@oracle.com>
References: <17882354-E976-4861-A683-3FD033AB4040@oracle.com>
Message-ID:

List is the free monoid over the type it collects, so + would be quite semantically well formed; the underlying method is even called add. But lists are an interface in Java, which really complicates things, and constructing a new list each time + is used, instead of guiding users to the add method, feels like adding a footgun just for some free monoid nerdery. I have similar things to say about the mathematics of Set. As a library writer I would probably add it because I think it's cute; as a language architect / standard library writer, it feels like you have the right of it.

I do hope you don't bind all operators together though, since the existence of an operator is what separates semigroups from groups, groups from rings, and rings from fields... I need *, + and - over matrices, and then I need * on a matrix with a scalar or a vector, but no division in general. And left and right vector multiplication should differ. Then vectors have + and -, and a scalar *, but no vector multiplication, because there's no canonical choice between tensor product and skew product, and I didn't want to implement a tensor library at this point. (I have seen ^ used for the skew product, which matches the wedge symbol generally used, but I wouldn't trust it on precedence, and I definitely don't want to overload precedence, so the semantics feel off.) You can sort of implement division over matrices, but there are similar good reasons not to. Lifting math operators like sqrt and exp onto scalar fields will be fun though.

Anyway, even if I have to throw a couple of UnsupportedOperationExceptions and get runtime errors instead of compiler errors, I will be quite happy. Looking forward to a preview of this.

-------- Original Message --------
On 9/2/25 00:14, Brian Goetz wrote:

> To shed some additional light on this, the argument here is not _only_ philosophical.
There is also a complexity and semantic argument against.
>
> Complexity. Supporting operators for things like `+` on collections is probably an order of magnitude more complexity than the proposal I sketched at JVMLS, because this would also require the ability to declare _new signatures_ for operators. The JLS currently defines the set of operators, their precedence, and their associativity, as well as their applicability. We treat + as a (T, T) -> T function, where T is one of the seven primitive types other than boolean. Adding _more_ types to that set is a far simpler thing than also saying that the operator can have an arbitrary function signature, such as `(C, T) -> C` (as would be needed for this example). Now we need syntax and type system rules for how you would declare such operators, plus more complex rules for resolving conflicts. And I suspect it wouldn't be long before someone started saying "and we'll need to control the precedence and associativity to make it make sense" too. This is a MUCH bigger feature, and it's already big. For a goal that seems marginal, at best.
>
> Semantics. In the current model, we can tie the semantics of + to an algebraic structure like a semigroup, which can even express constraints across operators (such as the distributive rule). This means that users can understand the semantics (at least in part) of expressions like `a+b` without regard for the type. But if we merely treated operators as "methods with funny names" and allowed them to be arbitrarily defined, including with asymmetric argument types, different return types, etc., then no one knows what + means without looking it up. Seems a pretty bad trade: all the semantics for a little extra expressiveness.
>
>> On Sep 1, 2025, at 5:58 PM, david Grajales wrote:
>>
>> Thanks for the answer Brian. It's an understandable position.
>>
>> My best regards to all the Java development team.
From artyomcool2 at gmail.com Tue Sep 2 10:09:37 2025
From: artyomcool2 at gmail.com (Artyom Drozdov)
Date: Tue, 2 Sep 2025 12:09:37 +0200
Subject: Operator overloading for collections?
In-Reply-To:
References: <17882354-E976-4861-A683-3FD033AB4040@oracle.com>
Message-ID:

Hello Amber team,

The current discussion, and especially Brian's point that "If this is how enough people feel [...] then we have to seriously consider doing nothing.", makes me join the discussion to defend the "doing nothing" position. As a constant reader of Amber's mailing list, I feel that the position of the conservative part of the Java community is presented mostly by the team, not by the end users. But we are also here. We are already very happy "not having" some features, and operator overloading is one of them.

In contrast to the position that "it is ok to have more runtime errors", I would like to say that I'm very happy with Java's way of encouraging easy-to-read, eyes-driven-debuggable code with a low amount of context required to understand what's going on and what I can and cannot do with that code without breaking it.

Most of the features that improve "writeability" reduce readability. Some of them were even accepted and implemented by Project Amber, and it is probably ok to have some trade-off here. But operator overloading, as for me, looks like overkill. It would be much harder to guess what's going on in very realistic scenarios, like using the + operator with the result of a method invocation (directly or with the "var" keyword). It would be harder to use the IDE to navigate. And it would be easier to break things by misunderstanding precedence or automatic conversion rules (if any of that takes place).

So I just want to add some voice to the "do nothing" counter.

Thank you,
Artyom Drozdov

On Tue, 2 Sep 2025 at 11:13, Aaryn Tonita wrote:

> List is the free monoid over the type it collects, so it would be quite semantically well formed; the underlying method is also called add.
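The list footgun Aaryn describes earlier in the thread, where a hypothetical list `+` must allocate a fresh list on every use even though it reads like `add`, can be sketched as a plain static method (the `ListPlus` and `plus` names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical desugaring of a list "+": every use copies the whole list,
// unlike the mutating add() it superficially resembles.
final class ListPlus {
    static <T> List<T> plus(List<T> xs, T x) {
        List<T> copy = new ArrayList<>(xs); // full O(n) copy per "+"
        copy.add(x);
        return copy;
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2);
        List<Integer> ys = plus(xs, 3); // fresh list; xs is untouched
        System.out.println(xs);         // [1, 2]
        System.out.println(ys);         // [1, 2, 3]
        // Appending n elements this way does O(n^2) total copying:
        // it reads like add() but does not cost like it.
    }
}
```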
From david.1993grajales at gmail.com Tue Sep 2 12:21:07 2025
From: david.1993grajales at gmail.com (david Grajales)
Date: Tue, 2 Sep 2025 07:21:07 -0500
Subject: Operator overloading for collections?
In-Reply-To:
References: <17882354-E976-4861-A683-3FD033AB4040@oracle.com>
Message-ID:

Maybe doing nothing is an extreme position. Valhalla allows many numeric types that almost any other language has (like complex numbers and linear equations), and it would certainly be a shame if we must write

var c3 = Complex.addition(c1, c2);

instead of

var c3 = 1-i2 + 2-i3;

or

var c3 = c1 + c2;

I know good-vs-perfect situations are the most ungrateful ones, though.

On Tue, 2 Sep 2025 at 5:09 a.m., Artyom Drozdov wrote:

> Hello Amber team,
>
> The current discussion, and especially Brian's point that "If this is how enough people feel [...] then we have to seriously consider doing nothing.", makes me join the discussion to defend the "doing nothing" position.

From atonita at proton.me Tue Sep 2 12:32:16 2025
From: atonita at proton.me (Aaryn Tonita)
Date: Tue, 02 Sep 2025 12:32:16 +0000
Subject: Operator overloading for collections?
In-Reply-To:
References: <17882354-E976-4861-A683-3FD033AB4040@oracle.com>
Message-ID:

> Most of the features that improve "writeability" reduce readability.

Numeric code is special. The numerical-algorithm counterpart of an off-by-one error (even when deriving the equation) is messing up a sign in a multi-line expression. Infix method chaining like p.add(rho).multiply(r.pow(3).multiply(4 * PI) ... ) is not easier to read for numerical code, and precedence shoves whole terms inside methods in ugly ways. Reviewing such code is very difficult because of the essential complexity of the formula, and the incidental complexity of the "FORmula TRANslation" obscures it when the translation strays so far. Reviewing such code is more like a careful comparison against a spec than the logical review that most developers perform.

Not every feature added to Java should be used by every Java developer. I really hope most developers never need to tweak garbage collector settings. It would be great if the libraries underlying machine learning (LAPACK and BLAS) could have competing pure-Java implementations, with reasonable user APIs, for the subset of developers that need them and want to remain in Java and interact with the Java ecosystem without polyglot pain.
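The chaining complaint above can be made concrete with a toy complex-number type. This `Complex` record and its method names are invented for illustration, not the Valhalla type under discussion:

```java
// The same formula via method chaining versus the infix form we would
// write on paper. Complex is a hypothetical illustration class.
record Complex(double re, double im) {
    Complex add(Complex o)      { return new Complex(re + o.re, im + o.im); }
    Complex multiply(Complex o) {
        return new Complex(re * o.re - im * o.im, re * o.im + im * o.re);
    }

    public static void main(String[] args) {
        Complex a = new Complex(1, -2); // 1 - 2i
        Complex b = new Complex(2, -3); // 2 - 3i
        Complex c = new Complex(0, 1);  // i

        // On paper: (a + b) * c. With operator support this would read
        // `var result = (a + b) * c;` -- the chained form hides the
        // parenthesization inside the call order.
        Complex chained = a.add(b).multiply(c);
        System.out.println(chained); // Complex[re=5.0, im=3.0]
    }
}
```

Even at this tiny size, verifying the chained form against the formula means re-deriving the precedence by hand, which is the review burden described above.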
> It would be harder to use IDE to navigate.

IDEs navigate to the implementation of the typeclass/method behind an overloaded operator in the languages that support it. It would be added to the Java support easily.

-------- Original Message --------
On 9/2/25 12:09, Artyom Drozdov wrote:

> Hello Amber team,
>
> The current discussion, and especially Brian's point that "If this is how enough people feel [...] then we have to seriously consider doing nothing.", makes me join the discussion to defend the "doing nothing" position.
URL: 

From artyomcool2 at gmail.com Tue Sep 2 12:49:47 2025
From: artyomcool2 at gmail.com (Artyom Drozdov)
Date: Tue, 2 Sep 2025 14:49:47 +0200
Subject: Operator overloading for collections?
In-Reply-To: 
References: <17882354-E976-4861-A683-3FD033AB4040@oracle.com>
Message-ID: 

> Numeric code is special.

Yes, it is. That's why it feels bad to me to give users too much control over it. Comparing "how easy it is to write good code" with "how easy it is to prevent the appearance of bad code", for me the second one is more important.

> Not every feature added to Java should be used by every Java developer.

Unfortunately, we can't prevent users from using "dangerous" features. The story of Unsafe proves that. We can't enforce a rule that "only certified developers can overload operators".

> It would be added to the Java support easily.

It is not about support. Navigating through methods is easier from a UX perspective, both with keyboard and with mouse/touchpad, compared with single-symbol operators. It also goes against Java users' experience with operators, which just don't need to be examined.

Tue, 2 Sept 2025, 14:32 Aaryn Tonita:

> > Most of the features that improve "writeability" reduce readability.
>
> Numeric code is special. The corresponding off-by-one error for a numerical algorithm (or even for deriving the equation) is messing up the sign in a multi-line expression. Infix method chaining like p.add(rho).multiply(r.pow(3).multiply(4 * PI) ... ) is not easier to read for numerical code, and precedence shoves whole terms inside methods in ugly ways. Reviewing such code is very difficult because the incidental complexity of the "FORmula TRANslation" obscures the essential complexity of the formula when the translation is so distant. Reviewing such code is more like a careful comparison with a spec than the logical review that most developers perform.
>
> Not every feature added to Java should be used by every Java developer.
> I really hope most developers never need to tweak garbage collector settings. It would be great if the libraries underlying machine learning (LAPACK and BLAS) could have competing pure-Java implementations with reasonable user APIs, for the subset of developers that need them and want to remain in Java and interact with the Java ecosystem without polyglot pain.
>
> > It would be harder to use IDE to navigate.
>
> IDEs navigate to the implementation of the typeclass/method for operator overloading in languages that support it. It would be added to the Java support easily.
>
> -------- Original Message --------
> On 9/2/25 12:09, Artyom Drozdov wrote:
>
> Hello Amber team,
>
> The current discussion, and especially Brian's point that "If this is how enough people feel [...] then we have to seriously consider doing nothing.", makes me join the discussion to defend the "doing nothing" position. As a constant reader of Amber's mailing list, I feel that the position of the conservative part of the Java community is presented mostly by the team, not by the end users. But we are also here. We are just very happy already by "not having" some features, and operator overloading is one of them.
>
> In contrast to the position that "it is ok to have more runtime errors", I would like to say that I'm very happy with Java's way of encouraging easy-to-read, eyes-driven-debuggable code with a low amount of context required to understand what's going on and what I can and cannot do with that code without breaking it.
>
> Most of the features that improve "writeability" reduce readability. Some of them were even accepted and implemented by Project Amber. And probably it is ok to have some trade-off here. But operator overloading, to me, looks like overkill. It would be much harder to guess what's going on in very realistic scenarios, like using the + operator with the result of a method invocation (directly or with the "var" keyword).
> It would be harder to use the IDE to navigate, and easier to break things by misunderstanding precedence or automatic conversion rules (if anything of that sort were in place).
>
> So I just want to add a voice to the "do nothing" counter.
>
> Thank you,
> Artyom Drozdov
>
> Tue, 2 Sept 2025 at 11:13, Aaryn Tonita:
>
>> List is the free monoid over the type it collects, so it would be quite semantically well formed; the underlying method is also called add. But lists are an interface in Java, which really complicates things, and constructing a new list each time + is used, instead of guiding users to the add method, feels like adding a footgun just for some free-monoid nerdery. I have similar things to say about the mathematics of Set. As a library writer I would probably add it because I think it's cute; as a language architect / standard library writer, it feels like you have the right of it.
>>
>> I do hope you don't bind all operators together though, since the existence of an operator is what separates semigroups from groups, groups from rings, and rings from fields... I need *, + and - over matrices, and then I need * on a matrix with a scalar or a vector, but no division in general. And left and right vector multiplication should differ. Then vectors have + and -, and a scalar *, but no vector multiplication, because there's no canonical choice between the tensor product and the skew product, and I didn't want to implement a tensor library at this point (I have seen ^ used for the skew product, which matches the wedge symbol generally used, but I wouldn't trust it on precedence, and I definitely don't want to overload precedence, so the semantics feel off). You can sort of implement division over matrices, but there are similar good reasons not to. Lifting math operators like sqrt and exp onto scalar fields will be fun though.
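The formula-translation burden discussed in this thread is easy to reproduce with the existing java.math.BigDecimal API, whose arithmetic must go through named methods; the polynomial below is an illustrative example of mine, not taken from the thread:

```java
import java.math.BigDecimal;

public class FormulaTranslation {
    // Evaluate a*x^2 + b*x + c. In infix notation the structure is obvious;
    // translated to method calls, the precedence disappears into call nesting.
    static BigDecimal poly(BigDecimal a, BigDecimal b, BigDecimal c, BigDecimal x) {
        return a.multiply(x.pow(2)).add(b.multiply(x)).add(c);
    }

    public static void main(String[] args) {
        BigDecimal r = poly(new BigDecimal("1"), new BigDecimal("2"),
                            new BigDecimal("3"), new BigDecimal("10"));
        System.out.println(r); // 1*100 + 2*10 + 3 = 123
    }
}
```

Reviewing `a.multiply(x.pow(2)).add(b.multiply(x)).add(c)` against the written formula is exactly the "careful comparison with a spec" Aaryn describes, even for this tiny expression.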
>> Anyway, even if I have to throw a couple UnsupportedOperationExceptions and get runtime errors instead of compiler errors, I will be quite happy. Looking forward to a preview of this.
>>
>> -------- Original Message --------
>> On 9/2/25 00:14, Brian Goetz wrote:
>>
>> To shed some additional light on this, the argument here is not _only_ philosophical. There is also a complexity and semantic argument against.
>>
>> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From redio.development at gmail.com Tue Sep 2 15:34:52 2025
From: redio.development at gmail.com (Red IO)
Date: Tue, 2 Sep 2025 17:34:52 +0200
Subject: Operator overloading for collections?
In-Reply-To: 
References: <17882354-E976-4861-A683-3FD033AB4040@oracle.com>
Message-ID: 

I think we are currently drifting away from "operators for collections"/specific Java standard classes toward "custom operator overloading for everyone", the latter being a clear non-goal everyone can agree on. That an operator shouldn't need to be looked up to be understood is clear.

On Tue, Sep 2, 2025, 14:50 Artyom Drozdov wrote:

> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From archie.cobbs at gmail.com Tue Sep 2 15:52:53 2025
From: archie.cobbs at gmail.com (Archie Cobbs)
Date: Tue, 2 Sep 2025 10:52:53 -0500
Subject: Operator overloading for collections?
In-Reply-To: <17882354-E976-4861-A683-3FD033AB4040@oracle.com>
References: <17882354-E976-4861-A683-3FD033AB4040@oracle.com>
Message-ID: 

On Mon, Sep 1, 2025 at 5:13 PM Brian Goetz wrote:

> To shed some additional light on this, the argument here is not _only_ philosophical. There is also a complexity and semantic argument against.

Personally I'm happy with the way things are, but this is also an interesting discussion. The "complexity" and "semantic" counter-arguments are compelling, but I'm still missing something. It seems like those counter-arguments would be (mostly) addressed by making the new operator feature merely "syntactic sugar", like for-each loops and try-with-resources. To give a concrete example, consider this proposal (this is just for discussion! I am seeking understanding/clarification, not agreement)...

Consider an expression like a + b. Let A be the compile-time type of a, and suppose A is not a primitive class. Then a + b would compile if and only if the expression a.operatorPlus(b) would compile, and it would have the exact same meaning. Similarly, a + b * c << d would just be syntactic sugar for a.operatorPlus(b.operatorTimes(c)).operatorLeftShift(d), etc. The rules for precedence and evaluation order stay the same. In other words, we declare that a + b and a.operatorPlus(b) are just two different spellings of the same abstract concept (if it exists, whatever it is)... so there is no need to differentiate between them. It's just a new syntax for the same thing, if such a thing is defined.

Re: Complexity:
- Operator associativity and precedence do not change.
- "Ability to declare new signatures" - use method overloading to declare multiple operatorPlus() variants taking different argument types.
- "Applicability" and "resolving conflicts" - handled using the normal method selection process.
- "...it wouldn't be long before someone started saying 'and we'll need to control the precedence and associativity to make it make sense'" - Hard no!
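The proposed desugaring can be mocked up today by writing the target form directly; operatorPlus, operatorTimes, and operatorLeftShift are the hypothetical names from the proposal, applied to an illustrative value record:

```java
public class DesugarSketch {
    // A small value-like wrapper carrying the hypothetical operator methods.
    record Num(int v) {
        Num operatorPlus(Num other)      { return new Num(v + other.v); }
        Num operatorTimes(Num other)     { return new Num(v * other.v); }
        Num operatorLeftShift(Num other) { return new Num(v << other.v); }
    }

    public static void main(String[] args) {
        Num a = new Num(1), b = new Num(2), c = new Num(3), d = new Num(1);
        // Under the proposal, `a + b * c << d` would desugar, using today's
        // precedence (* before +, then <<), to:
        Num r = a.operatorPlus(b.operatorTimes(c)).operatorLeftShift(d);
        System.out.println(r.v()); // (1 + 2*3) << 1 = 14
    }
}
```

The chain makes the trade-off visible: the compiler work is mostly name mangling, but every `+` a reader encounters now requires knowing which operatorPlus overload was selected.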
Re: Semantics:
- "no one knows what + means without looking it up" - isn't that already the case today?
- To understand a.foo(b) you must understand the method A.foo(b).
- To understand a + b you must understand the method A.operatorPlus(b).

The new methods allow arbitrary user code to participate, so they "grow the language" in an open way. I'm not saying there aren't still OTHER reasons not to add such a feature; I'm just trying to narrow down what they are. (Also, the question of whether, given this new feature, we would retrofit existing collection classes is a completely separate (and possibly more contentious) discussion.)

Thanks,
-Archie

-- 
Archie L. Cobbs
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pedro.lamarao at prodist.com.br Tue Sep 2 17:24:32 2025
From: pedro.lamarao at prodist.com.br (=?UTF-8?Q?Pedro_Lamar=C3=A3o?=)
Date: Tue, 2 Sep 2025 14:24:32 -0300
Subject: Operator overloading for collections?
In-Reply-To: 
References: <17882354-E976-4861-A683-3FD033AB4040@oracle.com>
Message-ID: 

On Tue, 2 Sept 2025 at 12:54, Archie Cobbs wrote:

> Re: Semantics:
> - "no one knows what + means without looking it up" - isn't that already the case today?

This reminds me of an old criticism of C++: that one does not "know without looking up" the meaning of local variable declarations, that constructor functions are "magic". The criticism usually went like this: in C you know the effect of a local variable declaration, you know it allocates space for a struct with Foo's layout, but in C++ you don't know the effect of a variable declaration, because it may run a constructor function, which may do anything it likes, and this lack of knowledge is a problem because it is "magic". I learned to program bare metal directly in C++ without going through C first; I learned and started practising both C and C++ at approximately the same time.
The above criticism never made any sense to me: the existence of constructor functions was part of my early training, and the need to look up the effects of constructors was part of my habit while reading foreign code. The fact that I needed to look up an initializer line or a function call line was just the same. I think this is a type of problem much less important than, say, the short-circuit problem -- that in C++ the primitive operator&& short-circuits but an overloaded operator&& does not. One could apply the same kind of observation -- that one must look up the meaning of code -- but to me, knowing if and in what order a machine evaluates subexpressions is a very different kind of thing from knowing the definition of an operator. I think one must know, just by looking at code, if and in what order a machine evaluates subexpressions, without knowing the actual definition of each subexpression; operator&& must either always short-circuit or never short-circuit.

-- 
Pedro Lamarão
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From steve at ethx.net Tue Sep 2 21:51:32 2025
From: steve at ethx.net (Steve Barham)
Date: Tue, 2 Sep 2025 22:51:32 +0100
Subject: Operator overloading for collections?
Message-ID: 

An HTML attachment was scrubbed...
URL: 

From maurizio.cimadamore at oracle.com Tue Sep 2 23:14:51 2025
From: maurizio.cimadamore at oracle.com (Maurizio Cimadamore)
Date: Wed, 3 Sep 2025 00:14:51 +0100
Subject: Fwd: Feedback about StableValues(Preview)
In-Reply-To: 
References: 
Message-ID: <693ec446-403b-4e3f-befa-d1f7fe4ab682@oracle.com>

Hi David,

Thanks for your feedback.

The factories that we provide, like StableValue::list and StableValue::map, cover common cases where the list/map elements are known upfront but should be computed lazily. In other words, you might think of these factories as lazy variants of List::of and Map::of.
Both kinds of factories return an unmodifiable collection -- that is a collection whose size is fixed, and that rejects update operations. I understand that you would like to create a "stable" map, whose key/values are not known upfront -- more specifically, where the keys are only known dynamically. I believe in these cases the "win" for using a stable map in the first place is much less obvious. If the map can grow dynamically (e.g. because you don't know how many entries you might be adding to it) you are probably looking at an implementation that has some way to "resize" itself -- which makes using something like a stable construct much harder. For instance, adding new entries to a map might cause the underlying array of buckets to be reallocated, and existing entries to be rehashed in the new (larger) bucket array. This means that the bucket array itself will need to be updated several times during the lifecycle of the map, making it not stable (remember: stable means "updated at most once"). If some constraints are relaxed, e.g. maybe you know how many entries you are going to add in your map -- that might make the problem easier, as now we're back in a situation where we know the size of the underlying storage. For instance one can have a specialized hash table implementation backed by a linear array (of an appropriate size), and then use linear probing to store entries in the linear array. Since the size is bounded, the size of the entries linear array is also bounded, and we can then make that linear array stable (e.g. use a stable list). Since such a "fixed size" hash map would be quite specialized, we have not yet seen enough motivation for adding it to the JDK -- especially given that developers should be able to define such constructs on top of the StableValue API (in fact _all_ the existing provided factories are defined in terms of the Stable Value API).
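[Editorial sketch] The fixed-size, linear-probing table described above can be illustrated in plain Java. Everything below is hypothetical example code, not a JDK API: an AtomicReferenceArray stands in for the stable list, giving each slot the same "written at most once" discipline; the real StableValue preview API is what would additionally let the JVM constant-fold reads of those slots.

```java
import java.util.concurrent.atomic.AtomicReferenceArray;
import java.util.function.Function;

// Hypothetical bounded lazy map: a linear-probing hash table over
// fixed-size storage where every slot is written at most once.
final class BoundedLazyMap<K, V> {
    private record Entry<K, V>(K key, V value) {}

    private final AtomicReferenceArray<Entry<K, V>> table; // stands in for a "stable list"
    private final Function<? super K, ? extends V> compute;

    BoundedLazyMap(int maxEntries, Function<? super K, ? extends V> compute) {
        // Oversize the table so probe sequences stay short.
        this.table = new AtomicReferenceArray<>(maxEntries * 2);
        this.compute = compute;
    }

    V get(K key) {
        int len = table.length();
        int i = Math.floorMod(key.hashCode(), len);
        // Linear probing: walk slots until we find the key or an empty slot.
        for (int probes = 0; probes < len; probes++, i = (i + 1) % len) {
            Entry<K, V> e = table.get(i);
            if (e == null) {
                Entry<K, V> fresh = new Entry<>(key, compute.apply(key));
                // Each slot transitions null -> non-null exactly once.
                if (table.compareAndSet(i, null, fresh)) {
                    return fresh.value();
                }
                e = table.get(i); // lost the race; re-read the winner
            }
            if (e.key().equals(key)) {
                return e.value();
            }
        }
        throw new IllegalStateException("capacity exceeded");
    }

    public static void main(String[] args) {
        BoundedLazyMap<String, Integer> lengths =
            new BoundedLazyMap<>(4, String::length);
        System.out.println(lengths.get("abc")); // computed on first access: 3
        System.out.println(lengths.get("abc")); // served from the table: 3
    }
}
```

Because the table never grows, the storage shape never changes after construction -- which is exactly the property stability needs, at the cost of a hard capacity limit.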
But it's not just about whether the size is known or not -- in order for the JVM to be able to apply any sort of constant-folding optimization to the map access, you need the key to be a constant (e.g. either some compile-time constant, or the value of a static final field, or the contents of some other stable value). Only then can we fold the entire map access expression (if we're lucky). But in the example you provide, the key provided to Map::get is just a random class name you get from the current stack. So there's very little for the JIT to optimize here. If the input (the key) is not known, then the access expression (Map::get) cannot be optimized. In other words, the case of a fully dynamic list/map that you propose just doesn't seem a great fit for stability, in the sense that you likely won't get any performance improvement (over a concurrent hash map) by using some sort of stable map there. Maurizio On 02/09/2025 02:45, david Grajales wrote: > > > ---------- Forwarded message --------- > From: *david Grajales* > Date: Mon, 1 Sep 2025 at 8:43 p.m. > Subject: Feedback about StableValues(Preview) > To: > > > Subject: Feedback and Questions on JEP 8359894 - Stable Values API > > Dear Java core-libs development team, > > Please accept my sincere gratitude and compliments for your ongoing > dedication to improving the Java platform. The continuous innovation > and thoughtful evolution of Java is truly appreciated by the developer > community. > > I have been experimenting with the Stable Values API (JEP 8359894) in > a development branch of a service at my company, and I would like to > share some observations and seek your guidance on a particular use case. > > > Current Implementation > > Currently, I have a logging utility that follows a standard pattern > for lazy value computation: > > > class DbLogUtility { >     private static final ConcurrentMap<String, Logger> loggerCache = > new ConcurrentHashMap<>(); > >     private DbLogUtility(){} > >     private static Logger getLogger() { >         var className = > Thread.currentThread().getStackTrace()[3].getClassName(); >         return loggerCache.computeIfAbsent(className, > LoggerFactory::getLogger); >     } >     public static void logError(){ >         //.... implementation detail >     } > } > > > Challenge with Stable Values API > > When attempting to migrate this code to use the Stable Values API, I > encountered a fundamental limitation: the API requires keys to be > known at compile time. The current factory methods > (|StableValue.function(Set, Function)| and > |StableValue.intFunction(int, IntFunction)|) expect predefined key > sets or bounded integer ranges. > > This design constraint makes it challenging to handle dynamic key > discovery scenarios, which are quite common in enterprise applications > for: > > * Logger caching by dynamically discovered class names > * Configuration caching by runtime-determined keys > * Resource pooling with dynamic identifiers > * Etc. > > > Questions and Feedback > > 1. *Am I missing an intended usage pattern?* Is there a recommended > approach within the current API design for handling dynamic key > discovery while maintaining the performance benefits of stable values? > 2. Would you consider any of these potential enhancements: > * Integration of stable value optimizations directly into > existing collection APIs (similar to how some methods have > been added to List and Map interfaces for better discoverability) > * A hybrid approach that provides stable value benefits for > dynamically discovered keys > 3. Do you envision the Stable Values API as primarily serving > compile-time-known scenarios, with dynamic use cases continuing to > rely on traditional concurrent collections? > > Thank you for your time and consideration. I would be grateful for any > guidance or clarification you might provide on these questions.
If > there are planned enhancements > or alternative patterns I should > consider, I would very much appreciate your insights. > > Best regards, and always yours. > > David Grajales Cárdenas. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sarma.swaranga at gmail.com Tue Sep 2 23:40:26 2025 From: sarma.swaranga at gmail.com (Swaranga Sarma) Date: Tue, 2 Sep 2025 16:40:26 -0700 Subject: Fwd: Feedback about StableValues(Preview) In-Reply-To: <693ec446-403b-4e3f-befa-d1f7fe4ab682@oracle.com> References: <693ec446-403b-4e3f-befa-d1f7fe4ab682@oracle.com> Message-ID: I see the ask as slightly different from stable values. Stable values are really two things: a compile-time constant that can be inlined, and a value that is lazily initialized exactly once. This ask is for the second part - being able to lazily initialize a value with the same exactly-once guarantees without the restriction that the keys must be known at compile time. I have personally used Guava's Suppliers.memoize(Supplier s) or even ConcurrentHashMap's computeIfAbsent for my use-cases and in those I didn't really care about inlining/folding, just the lazy aspect of it. To me it is not a huge deal not having a language or even a standard Java API for it; I have managed without it so far. Regards Swaranga On Tue, Sep 2, 2025 at 4:15 PM Maurizio Cimadamore < maurizio.cimadamore at oracle.com> wrote: > Hi David, > Thanks for your feedback. > > The factories that we provide, like StableValue::list and StableValue::map > cover common cases where the list/map elements are known upfront, but > should be computed lazily. > > In other words you might think of these factories as lazy variants of > List::of and Map::of. Both kinds of factories return an unmodifiable > collection -- that is a collection whose size is fixed, and that rejects > update operations.
> > I understand that you would like to create a "stable" map, whose > key/values are not known upfront -- more specifically, where the keys are > only known dynamically. > > I believe in these cases the "win" for using a stable map in the first > place is much less obvious. If the map can grow dynamically (e.g. because > you don't know how many entries you might be adding to it) you are probably > looking at an implementation that has some way to "resize" itself -- which > makes using something like a stable construct much harder. For instance, > adding new entries on a map might cause the underlying array of buckets to > be reallocated, and existing entries to be rehashed in the new (larger) > bucket array. This means that the bucket array itself will need to be > updated several times during the lifecycle of the map, making it not stable > (remember: stable means "updated at most once"). > > If some constraints are relaxed, e.g. maybe you know how many entries you > are going to add in your map -- that might make the problem easier, as now > we're back in a situation where we now the size of the underlying storage. > For instance one can have a specialized hash table implementation backed by > a linear array (of an appropriate size), and then use linear probing to > store entries in the linear array. Since the size is bounded, the size of > the entries linear array is also bounded, and we can then make that linear > array stable (e.g. use a stable list). > > Since such a "fixed size" hash map would be quite specialized, we did not > see yet enough motivation for adding it to the JDK -- especially given that > developers should be able to define such constructs on top of the > StableValue API (in fact _all_ the existing provided factories are defined > in terms of the Stable Value API). 
> > But it's not just about whether the size is known or not -- in order for > the JVM to be able to apply any sort of constant-folding optimization to > the map access, you need the key to be a constant (e.g. either some > compile-time constant, or the value of a static final field, or the > contents of some other stable value). Only then we can fold the entire map > access expression (if we're lucky). But in the example you provide, the key > provided to Map::get is just a random class name you get from the current > stack. So there's very little for the JIT to optimize here. If the input > (the key) is not known, then the access expression (Map::get) cannot be > optimized. > > In other words, the case of a fully dynamic list/map that you propose just > doesn't seem a great fit for stability, in the sense that you likely won't > get any performance improvement (over a concurrent hash map) by using some > sort of stable map there. > > Maurizio > > > On 02/09/2025 02:45, david Grajales wrote: > > > > ---------- Forwarded message --------- > De: david Grajales > Date: lun, 1 sept 2025 a la(s) 8:43?p.m. > Subject: Feedback about StableValues(Preview) > To: > > > Subject: Feedback and Questions on JEP 8359894 - Stable Values API > > Dear Java core-libs development team, > > Please accept my sincere gratitude and compliments for your ongoing > dedication to improving the Java platform. The continuous innovation and > thoughtful evolution of Java is truly appreciated by the developer > community. > > I have been experimenting with the Stable Values API (JEP 8359894) in a > development branch of a service at my company, and I would like to share > some observations and seek your guidance on a particular use case. 
> > > Current Implementation > > Currently, I have a logging utility that follows a standard pattern for > lazy value computation: > > class DbLogUtility { > private static final ConcurrentMap loggerCache = new > ConcurrentHashMap<>(); > > private DbLogUtility(){} > > private static Logger getLogger() { > var className = > Thread.currentThread().getStackTrace()[3].getClassName(); > return loggerCache.computeIfAbsent(className, > LoggerFactory::getLogger); > } > public static void logError(){ > //.... implementation detail > } > } > > Challenge with Stable Values API > > When attempting to migrate this code to use the Stable Values API, I > encountered a fundamental limitation: the API requires keys to be known at > compile time. The current factory methods (StableValue.function(Set, > Function) and StableValue.intFunction(int, IntFunction)) expect > predefined key sets or bounded integer ranges. > > This design constraint makes it challenging to handle dynamic key > discovery scenarios, which are quite common in enterprise applications for: > > - Logger caching by dynamically discovered class names > - Configuration caching by runtime-determined keys > - Resource pooling with dynamic identifiers > - Etc. > > > Questions and Feedback > > 1. *Am I missing an intended usage pattern?* Is there a recommended > approach within the current API design for handling dynamic key discovery > while maintaining the performance benefits of stable values? > 2. Would you consider any of these potential enhancements: > - Integration of stable value optimizations directly into existing > collection APIs (similar to how some methods have been added to List and > Map interfaces for better discoverability) > - A hybrid approach that provides stable value benefits for > dynamically discovered keys > 3. Do you envision the Stable Values API as primarily serving > compile-time-known scenarios, with dynamic use cases continuing to rely on > traditional concurrent collections? 
> > Thank you for your time and consideration. I would be grateful for any > guidance or clarification you might provide on these questions. If there > are planned enhancements or alternative patterns I should consider, I would > very much appreciate your insights. > > Best regards, and always yours. > > David Grajales Cárdenas. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.1993grajales at gmail.com Tue Sep 2 23:54:12 2025 From: david.1993grajales at gmail.com (david Grajales) Date: Tue, 2 Sep 2025 18:54:12 -0500 Subject: Fwd: Feedback about StableValues(Preview) In-Reply-To: <693ec446-403b-4e3f-befa-d1f7fe4ab682@oracle.com> References: <693ec446-403b-4e3f-befa-d1f7fe4ab682@oracle.com> Message-ID: Hello Maurizio. Thanks for the quick response and explanation. So, in short, stable values are not meant for very dynamic scenarios because the API prioritizes performance and efficiency at runtime and sacrifices flexibility in exchange; which translates as "the keys, or at least the size of the StableValue, must be known at compile time". It's a fair trade-off, I am keeping it in mind for the next tests and experiments I do. Best regards. On Tue, 2 Sep 2025 at 6:14 p.m., Maurizio Cimadamore < maurizio.cimadamore at oracle.com> wrote: > Hi David, > Thanks for your feedback. > > The factories that we provide, like StableValue::list and StableValue::map > cover common cases where the list/map elements are known upfront, but > should be computed lazily. > > In other words you might think of these factories as lazy variants of > List::of and Map::of. Both kinds of factories return an unmodifiable > collection -- that is a collection whose size is fixed, and that rejects > update operations. > > I understand that you would like to create a "stable" map, whose > key/values are not known upfront -- more specifically, where the keys are > only known dynamically.
> > I believe in these cases the "win" for using a stable map in the first > place is much less obvious. If the map can grow dynamically (e.g. because > you don't know how many entries you might be adding to it) you are probably > looking at an implementation that has some way to "resize" itself -- which > makes using something like a stable construct much harder. For instance, > adding new entries on a map might cause the underlying array of buckets to > be reallocated, and existing entries to be rehashed in the new (larger) > bucket array. This means that the bucket array itself will need to be > updated several times during the lifecycle of the map, making it not stable > (remember: stable means "updated at most once"). > > If some constraints are relaxed, e.g. maybe you know how many entries you > are going to add in your map -- that might make the problem easier, as now > we're back in a situation where we now the size of the underlying storage. > For instance one can have a specialized hash table implementation backed by > a linear array (of an appropriate size), and then use linear probing to > store entries in the linear array. Since the size is bounded, the size of > the entries linear array is also bounded, and we can then make that linear > array stable (e.g. use a stable list). > > Since such a "fixed size" hash map would be quite specialized, we did not > see yet enough motivation for adding it to the JDK -- especially given that > developers should be able to define such constructs on top of the > StableValue API (in fact _all_ the existing provided factories are defined > in terms of the Stable Value API). > > But it's not just about whether the size is known or not -- in order for > the JVM to be able to apply any sort of constant-folding optimization to > the map access, you need the key to be a constant (e.g. either some > compile-time constant, or the value of a static final field, or the > contents of some other stable value). 
Only then we can fold the entire map > access expression (if we're lucky). But in the example you provide, the key > provided to Map::get is just a random class name you get from the current > stack. So there's very little for the JIT to optimize here. If the input > (the key) is not known, then the access expression (Map::get) cannot be > optimized. > > In other words, the case of a fully dynamic list/map that you propose just > doesn't seem a great fit for stability, in the sense that you likely won't > get any performance improvement (over a concurrent hash map) by using some > sort of stable map there. > > Maurizio > > > On 02/09/2025 02:45, david Grajales wrote: > > > > ---------- Forwarded message --------- > De: david Grajales > Date: lun, 1 sept 2025 a la(s) 8:43?p.m. > Subject: Feedback about StableValues(Preview) > To: > > > Subject: Feedback and Questions on JEP 8359894 - Stable Values API > > Dear Java core-libs development team, > > Please accept my sincere gratitude and compliments for your ongoing > dedication to improving the Java platform. The continuous innovation and > thoughtful evolution of Java is truly appreciated by the developer > community. > > I have been experimenting with the Stable Values API (JEP 8359894) in a > development branch of a service at my company, and I would like to share > some observations and seek your guidance on a particular use case. > > > Current Implementation > > Currently, I have a logging utility that follows a standard pattern for > lazy value computation: > > class DbLogUtility { > private static final ConcurrentMap loggerCache = new > ConcurrentHashMap<>(); > > private DbLogUtility(){} > > private static Logger getLogger() { > var className = > Thread.currentThread().getStackTrace()[3].getClassName(); > return loggerCache.computeIfAbsent(className, > LoggerFactory::getLogger); > } > public static void logError(){ > //.... 
implementation detail > } > } > > Challenge with Stable Values API > > When attempting to migrate this code to use the Stable Values API, I > encountered a fundamental limitation: the API requires keys to be known at > compile time. The current factory methods (StableValue.function(Set, > Function) and StableValue.intFunction(int, IntFunction)) expect > predefined key sets or bounded integer ranges. > > This design constraint makes it challenging to handle dynamic key > discovery scenarios, which are quite common in enterprise applications for: > > - Logger caching by dynamically discovered class names > - Configuration caching by runtime-determined keys > - Resource pooling with dynamic identifiers > - Etc. > > > Questions and Feedback > > 1. *Am I missing an intended usage pattern?* Is there a recommended > approach within the current API design for handling dynamic key discovery > while maintaining the performance benefits of stable values? > 2. Would you consider any of these potential enhancements: > - Integration of stable value optimizations directly into existing > collection APIs (similar to how some methods have been added to List and > Map interfaces for better discoverability) > - A hybrid approach that provides stable value benefits for > dynamically discovered keys > 3. Do you envision the Stable Values API as primarily serving > compile-time-known scenarios, with dynamic use cases continuing to rely on > traditional concurrent collections? > > Thank you for your time and consideration. I would be grateful for any > guidance or clarification you might provide on these questions. If there > are planned enhancements or alternative patterns I should consider, I would > very much appreciate your insights. > > Best regards, and always yours. > > David Grajales C?rdenas. > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maurizio.cimadamore at oracle.com Wed Sep 3 00:03:00 2025 From: maurizio.cimadamore at oracle.com (Maurizio Cimadamore) Date: Wed, 3 Sep 2025 01:03:00 +0100 Subject: Fwd: Feedback about StableValues(Preview) In-Reply-To: References: <693ec446-403b-4e3f-befa-d1f7fe4ab682@oracle.com> Message-ID: On 03/09/2025 00:54, david Grajales wrote: > Hello Maurizio. Thanks for the quick response and explanation. > > So, in short, stable values are not meant for very dynamic scenarios > because the API prioritizes performance and efficiency at runtime and > sacrifices flexibility in exchange; which translates as "the keys, or > at least the size of the StableValue, must be known at compile time". I think so -- e.g. to get better performance than a concurrent hash map you need to give up some flexibility. If you don't, that's fine, but then a concurrent hash map is likely the best implementation for your use case. I think the main use case for the stable lists and maps we provide is to deal with groups of stable values. E.g. think of a situation where a class has maybe 10 fields like Logger -- fields that are lazily initialized. Do you declare 10 holder classes? Or 10 stable values, each with its own update logic? With a stable map you can actually group these stable values together -- which means you only need one lambda, not 10. Maurizio > > Best regards. > > On Tue, 2 Sep 2025 at 6:14 p.m., Maurizio Cimadamore > wrote: > > Hi David, > Thanks for your feedback. > > The factories that we provide, like StableValue::list and > StableValue::map cover common cases where the list/map elements > are known upfront, but should be computed lazily. > > In other words you might think of these factories as lazy variants > of List::of and Map::of. Both kinds of factories return an > unmodifiable collection -- that is a collection whose size is > fixed, and that rejects update operations.
> > I understand that you would like to create a "stable" map, whose > key/values are not known upfront -- more specifically, where the > keys are only known dynamically. > > I believe in these cases the "win" for using a stable map in the > first place is much less obvious. If the map can grow dynamically > (e.g. because you don't know how many entries you might be adding > to it) you are probably looking at an implementation that has some > way to "resize" itself -- which makes using something like a > stable construct much harder. For instance, adding new entries on > a map might cause the underlying array of buckets to be > reallocated, and existing entries to be rehashed in the new > (larger) bucket array. This means that the bucket array itself > will need to be updated several times during the lifecycle of the > map, making it not stable (remember: stable means "updated at most > once"). > > If some constraints are relaxed, e.g. maybe you know how many > entries you are going to add in your map -- that might make the > problem easier, as now we're back in a situation where we now the > size of the underlying storage. For instance one can have a > specialized hash table implementation backed by a linear array (of > an appropriate size), and then use linear probing to store entries > in the linear array. Since the size is bounded, the size of the > entries linear array is also bounded, and we can then make that > linear array stable (e.g. use a stable list). > > Since such a "fixed size" hash map would be quite specialized, we > did not see yet enough motivation for adding it to the JDK -- > especially given that developers should be able to define such > constructs on top of the StableValue API (in fact _all_ the > existing provided factories are defined in terms of the Stable > Value API). 
> > But it's not just about whether the size is known or not -- in > order for the JVM to be able to apply any sort of constant-folding > optimization to the map access, you need the key to be a constant > (e.g. either some compile-time constant, or the value of a static > final field, or the contents of some other stable value). Only > then we can fold the entire map access expression (if we're > lucky). But in the example you provide, the key provided to > Map::get is just a random class name you get from the current > stack. So there's very little for the JIT to optimize here. If the > input (the key) is not known, then the access expression > (Map::get) cannot be optimized. > > In other words, the case of a fully dynamic list/map that you > propose just doesn't seem a great fit for stability, in the sense > that you likely won't get any performance improvement (over a > concurrent hash map) by using some sort of stable map there. > > Maurizio > > > On 02/09/2025 02:45, david Grajales wrote: >> >> >> ---------- Forwarded message --------- >> De: *david Grajales* >> Date: lun, 1 sept 2025 a la(s) 8:43?p.m. >> Subject: Feedback about StableValues(Preview) >> To: >> >> >> Subject: Feedback and Questions on JEP 8359894 - Stable Values API >> >> Dear Java core-libs development team, >> >> Please accept my sincere gratitude and compliments for your >> ongoing dedication to improving the Java platform. The continuous >> innovation and thoughtful evolution of Java is truly appreciated >> by the developer community. >> >> I have been experimenting with the Stable Values API (JEP >> 8359894) in a development branch of a service at my company, and >> I would like to share some observations and seek your guidance on >> a particular use case. >> >> >> Current Implementation >> >> Currently, I have a logging utility that follows a standard >> pattern for lazy value computation: >> >> >> class DbLogUtility { >> ? ? 
private static final ConcurrentMap >> loggerCache = new ConcurrentHashMap<>(); >> >> ? ? private DbLogUtility(){} >> >> ? ? private static Logger getLogger() { >> ? ? ? ? var className = >> Thread.currentThread().getStackTrace()[3].getClassName(); >> ? ? ? ? return loggerCache.computeIfAbsent(className, >> LoggerFactory::getLogger); >> ? ? } >> ? ? public static void logError(){ >> ? ? ? ? //.... implementation detail >> ? ? } >> } >> >> >> Challenge with Stable Values API >> >> When attempting to migrate this code to use the Stable Values >> API, I encountered a fundamental limitation: the API requires >> keys to be known at compile time. The current factory methods >> (|StableValue.function(Set, Function)| and >> |StableValue.intFunction(int, IntFunction)|) expect predefined >> key sets or bounded integer ranges. >> >> This design constraint makes it challenging to handle dynamic key >> discovery scenarios, which are quite common in enterprise >> applications for: >> >> * Logger caching by dynamically discovered class names >> * Configuration caching by runtime-determined keys >> * Resource pooling with dynamic identifiers >> * Etc. >> >> >> Questions and Feedback >> >> 1. *Am I missing an intended usage pattern?* Is there a >> recommended approach within the current API design for >> handling dynamic key discovery while maintaining the >> performance benefits of stable values? >> 2. ?Would you consider any of these potential enhancements: >> * Integration of stable value optimizations directly into >> existing collection APIs (similar to how some methods >> have been added to List and Map interfaces for better >> discoverability) >> * A hybrid approach that provides stable value benefits for >> dynamically discovered keys >> 3. Do you envision the Stable Values API as primarily serving >> compile-time-known scenarios, with dynamic use cases >> continuing to rely on traditional concurrent collections? >> >> Thank you for your time and consideration. 
I would be grateful >> for any guidance or clarification you might provide on these >> questions. If there are planned enhancements or alternative >> patterns I should consider, I would very much appreciate your >> insights. >> >> Best regards, and always yours. >> >> David Grajales Cárdenas. >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sritter at azul.com Fri Sep 5 10:59:51 2025 From: sritter at azul.com (Simon Ritter) Date: Fri, 5 Sep 2025 11:59:51 +0100 Subject: Question on Primitive Types in Patterns Message-ID: <8516f22c-e0be-9c8e-aa34-bf9b608886fd@azul.com> Hi, I've been giving a presentation on Patterns in the Java language and including some puzzles. The recent inclusion of Primitive Types in Patterns has provided some interesting material. I currently have one puzzle that I can't quite explain; hopefully someone on the mailing list can provide clarification. Let's start with this simple example: int x = getX(); // x is 42 switch (x) { case byte b -> System.out.println("byte"); case int i -> System.out.println("int"); } Here we have a runtime check, which establishes that the conversion from int to byte is exact, as there is no loss of information. If we reverse the order of the cases: switch (x) { case int i -> System.out.println("int"); case byte b -> System.out.println("byte"); } The code will not compile, as the int case dominates the byte case. So far, so good. However, if we change the int case to use a wrapper class: switch (x) { case Integer i -> System.out.println("int"); case byte b -> System.out.println("byte"); } the code will compile and the result is 'int'. If I look at JEP 507, under the section on Safety of conversions, it states that "...boxing conversions and widening reference conversions are unconditionally exact." The compiler is autoboxing the int, x, to create an Integer object, which always matches the first case.
What I can't explain is why the compiler does not still see this as pattern dominance? No value of x will ever result in the switch matching on byte so the code is unreachable. Thanks in advance, Simon. From dvohra16 at gmail.com Fri Sep 5 13:02:34 2025 From: dvohra16 at gmail.com (Deepak Vohra) Date: Fri, 5 Sep 2025 09:02:34 -0400 Subject: Question on Primitive Types in Patterns In-Reply-To: <8516f22c-e0be-9c8e-aa34-bf9b608886fd@azul.com> References: <8516f22c-e0be-9c8e-aa34-bf9b608886fd@azul.com> Message-ID: Are you using a Java version that supports pattern matching for switch (JDK 17+ as a preview feature, standardized in JDK 21)? Or, it is a bug. On Fri, Sep 5, 2025 at 7:09 AM Simon Ritter wrote: > Hi, > I've been giving a presentation on Patterns in the Java language and > including some puzzles. The recent inclusion of Primitive Types in > Patterns has provided some interesting material. I currently have one > puzzle that I can't quite explain; hopefully someone on the mailing list > can provide clarification. > > Let's start with this simple example: > > int x = getX(); // x is 42 > > switch (x) { > case byte b -> System.out.println("byte"); > case int i -> System.out.println("int"); > } > > Here we have a runtime check, which establishes that the conversion from > int to byte is exact, as there is no loss of information. > > If we reverse the order of the cases: > > switch (x) { > case int i -> System.out.println("int"); > case byte b -> System.out.println("byte"); > } > > The code will not compile, as the int case dominates the byte case. > > So far, so good. > > However, if we change the int case to use a wrapper class: > > switch (x) { > case Integer i -> System.out.println("int"); > case byte b -> System.out.println("byte"); > } > > the code will compile and the result is 'int'. > > If I look at JEP 507, under the section on Safety of conversions, it > states that "...boxing conversions and widening reference conversions > are unconditionally exact."
The compiler is autoboxing the int, x, to > create an Integer object, which always matches the first case. > > What I can't explain is why the compiler does not still see this as > pattern dominance? No value of x will ever result in the switch > matching on byte so the code is unreachable. > > Thanks in advance, > > Simon. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dvohra16 at gmail.com Fri Sep 5 13:15:30 2025 From: dvohra16 at gmail.com (Deepak Vohra) Date: Fri, 5 Sep 2025 09:15:30 -0400 Subject: Question on Primitive Types in Patterns In-Reply-To: References: <8516f22c-e0be-9c8e-aa34-bf9b608886fd@azul.com> Message-ID: The *Integer* can't auto-box *byte*; therefore, it is not pattern dominance. On Fri, Sep 5, 2025 at 9:02 AM Deepak Vohra wrote: > Are you using a Java version that supports pattern matching for switch > (JDK 17+ as a preview feature, standardized in JDK 21)? Or it may be a bug. > > On Fri, Sep 5, 2025 at 7:09 AM Simon Ritter wrote: > >> Hi, >> I've been giving a presentation on Patterns in the Java language and >> including some puzzles. The recent inclusion of Primitive Types in >> Patterns has provided some interesting material. I currently have one >> puzzle that I can't quite explain; hopefully someone on the mailing list >> can provide clarification. >> >> Let's start with this simple example: >> >> int x = getX(); // x is 42 >> >> switch (x) { >> case byte b -> System.out.println("byte"); >> case int i -> System.out.println("int"); >> } >> >> Here we have a runtime check, which establishes that the conversion from >> int to byte is exact, as there is no loss of information. >> >> If we reverse the order of the cases: >> >> switch (x) { >> case int i -> System.out.println("int"); >> case byte b -> System.out.println("byte"); >> } >> >> The code will not compile, as the int case dominates the byte case. >> >> So far, so good. 
>> >> However, if we change the int case to use a wrapper class: >> >> switch (x) { >> case Integer i -> System.out.println("int"); >> case byte b -> System.out.println("byte"); >> } >> >> the code will compile and the result is 'int'. >> >> If I look at JEP 507, under the section on Safety of conversions, it >> states that "...boxing conversions and widening reference conversions >> are unconditionally exact." The compiler is autoboxing the int, x, to >> create an Integer object, which always matches the first case. >> >> What I can't explain is why the compiler does not still see this as >> pattern dominance? No value of x will ever result in the switch >> matching on byte so the code is unreachable. >> >> Thanks in advance, >> >> Simon. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Fri Sep 5 14:30:06 2025 From: brian.goetz at oracle.com (Brian Goetz) Date: Fri, 5 Sep 2025 10:30:06 -0400 Subject: Question on Primitive Types in Patterns In-Reply-To: <8516f22c-e0be-9c8e-aa34-bf9b608886fd@azul.com> References: <8516f22c-e0be-9c8e-aa34-bf9b608886fd@azul.com> Message-ID: > > I've been giving a presentation on Patterns in the Java language and > including some puzzles. The recent inclusion of Primitive Types in > Patterns has provided some interesting material. I currently have one > puzzle that I can't quite explain; hopefully someone on the mailing > list can provide clarification. On a pedagogical note, I'd like to make a case for "Java Puzzlers Talks Considered Harmful." They can be fun to write (which is why we get so many of them), but I have found that the vast majority of the time, a good chunk of the audience comes away with notions that are closer to "XYZ is broken", "XYZ is just all ad-hoc complexity", or "XYZ has no organizing design principle" -- even though this is rarely the intent of the presenter (often the opposite, in fact.) 
When Josh and Neal started with Puzzlers, they could at least come from the position of "these were some arguably-mistakes that *we* made". Not only did this give them credibility that almost no other "Java Puzzlers" presenter could have, but it pushed them to dig deeper to present language design tradeoffs _from the perspective of language designers_. I cannot count the number of times where someone has seen a "puzzler" presentation or blog, and learned the exact opposite lesson than it was trying to teach. This is an extraordinarily difficult format for teaching. Worse, the nature of a "Puzzlers" talk requires having a bunch of them, and -- even when Josh and Neal did it -- there were always a few that didn't live up to the standards; the format often overtakes the message. It is just a truly punishing format, one that requires world-class pedagogy and impeccable preparation to avoid the all-too-common outcome where many in the audience learn the wrong lesson (often reinforcing their preconceived but ill-examined assumptions about "X is bad.") From dvohra16 at gmail.com Fri Sep 5 16:02:24 2025 From: dvohra16 at gmail.com (Deepak Vohra) Date: Fri, 5 Sep 2025 12:02:24 -0400 Subject: Question on Primitive Types in Patterns In-Reply-To: References: <8516f22c-e0be-9c8e-aa34-bf9b608886fd@azul.com> Message-ID: Detailed answer: The compiler does check for pattern dominance to prevent unreachable code; however, it uses the formal definition of pattern dominance. A pattern P1 (the first case) dominates a pattern P2 (the second case) if every possible value that matches P2 would also match P1. The compiler determines pattern dominance by comparing the patterns themselves, without considering the type of the variable (int x) being switched on. Pattern P1: Integer i Pattern P2: byte b Does every value that matches *byte b* also match *Integer i*? No. A value matches *byte b* if it's an instance of the *Byte* wrapper class. 
A value matches *Integer i* if it's an instance of the *Integer* wrapper class. Since a *Byte* object is never an *Integer* object (they are separate classes), *Integer i* does not dominate *byte b*. On Fri, Sep 5, 2025 at 9:15 AM Deepak Vohra wrote: > The *Integer* can't auto-box *byte*; therefore, it is not pattern > dominance. > > On Fri, Sep 5, 2025 at 9:02 AM Deepak Vohra wrote: > >> Are you using a Java version that supports pattern matching for switch >> (JDK 17+ as a preview feature, standardized in JDK 21)? Or it may be a bug. >> >> On Fri, Sep 5, 2025 at 7:09 AM Simon Ritter wrote: >> >>> Hi, >>> I've been giving a presentation on Patterns in the Java language and >>> including some puzzles. The recent inclusion of Primitive Types in >>> Patterns has provided some interesting material. I currently have one >>> puzzle that I can't quite explain; hopefully someone on the mailing list >>> can provide clarification. >>> >>> Let's start with this simple example: >>> >>> int x = getX(); // x is 42 >>> >>> switch (x) { >>> case byte b -> System.out.println("byte"); >>> case int i -> System.out.println("int"); >>> } >>> >>> Here we have a runtime check, which establishes that the conversion from >>> int to byte is exact, as there is no loss of information. >>> >>> If we reverse the order of the cases: >>> >>> switch (x) { >>> case int i -> System.out.println("int"); >>> case byte b -> System.out.println("byte"); >>> } >>> >>> The code will not compile, as the int case dominates the byte case. >>> >>> So far, so good. >>> >>> However, if we change the int case to use a wrapper class: >>> >>> switch (x) { >>> case Integer i -> System.out.println("int"); >>> case byte b -> System.out.println("byte"); >>> } >>> >>> the code will compile and the result is 'int'. >>> >>> If I look at JEP 507, under the section on Safety of conversions, it >>> states that "...boxing conversions and widening reference conversions >>> are unconditionally exact." 
The compiler is autoboxing the int, x, to >>> create an Integer object, which always matches the first case. >>> >>> What I can't explain is why the compiler does not still see this as >>> pattern dominance? No value of x will ever result in the switch >>> matching on byte so the code is unreachable. >>> >>> Thanks in advance, >>> >>> Simon. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From forax at univ-mlv.fr Fri Sep 5 16:46:14 2025 From: forax at univ-mlv.fr (Remi Forax) Date: Fri, 5 Sep 2025 18:46:14 +0200 (CEST) Subject: Question on Primitive Types in Patterns In-Reply-To: References: <8516f22c-e0be-9c8e-aa34-bf9b608886fd@azul.com> Message-ID: <1782281663.23847555.1757090774349.JavaMail.zimbra@univ-eiffel.fr> ----- Original Message ----- > From: "Brian Goetz" > To: "Simon Ritter" , "amber-dev" > Sent: Friday, September 5, 2025 4:30:06 PM > Subject: Re: Question on Primitive Types in Patterns >> >> I've been giving a presentation on Patterns in the Java language and >> including some puzzles. The recent inclusion of Primitive Types in >> Patterns has provided some interesting material. I currently have one >> puzzle that I can't quite explain; hopefully someone on the mailing >> list can provide clarification. > > On a pedagogical note, I'd like to make a case for "Java Puzzlers Talks > Considered Harmful." They can be fun to write (which is why we get so > many of them), but I have found that the vast majority of the time, a good > chunk of the audience comes away with notions that are closer to "XYZ is > broken", "XYZ is just all ad-hoc complexity", or "XYZ has no organizing > design principle" -- even though this is rarely the intent of the > presenter (often the opposite, in fact.) > > When Josh and Neal started with Puzzlers, they could at least come from > the position of "these were some arguably-mistakes that *we* made". 
Not > only did this give them credibility that almost no other "Java Puzzlers" > presenter could have, but it pushed them to dig deeper to present > language design tradeoffs _from the perspective of language designers_. > > I cannot count the number of times where someone has seen a "puzzler" > presentation or blog, and learned the exact opposite lesson than it was > trying to teach. This is an extraordinarily difficult format for > teaching. Worse, the nature of a "Puzzlers" talk requires having a > bunch of them, and -- even when Josh and Neal did it -- there were > always a few that didn't live up to the standards; the format often > overtakes the message. It is just a truly punishing format, one that > requires world-class pedagogy and impeccable preparation to avoid the > all-too-common outcome where many in the audience learn the wrong lesson > (often reinforcing their preconceived but ill-examined assumptions about > "X is bad.") This is a preview feature, so for me finding corner cases is fair game. It helps us to understand the tradeoffs of this feature. There are always puzzlers, for every feature, but there are two categories of puzzlers: - those that enlighten you (like this is why Foo behaves differently as a top level pattern or as an inner pattern), as you said, it's quite hard to convey that in a presentation. - those that just show the designers being too clever (let's make ?: boxing rules different) or the feature being consistent only with itself and not with the rest of the world (the X cross Y problem). 
regards, Rémi From davidalayachew at gmail.com Fri Sep 5 20:16:46 2025 From: davidalayachew at gmail.com (David Alayachew) Date: Fri, 5 Sep 2025 16:16:46 -0400 Subject: Question on Primitive Types in Patterns In-Reply-To: <1782281663.23847555.1757090774349.JavaMail.zimbra@univ-eiffel.fr> References: <8516f22c-e0be-9c8e-aa34-bf9b608886fd@azul.com> <1782281663.23847555.1757090774349.JavaMail.zimbra@univ-eiffel.fr> Message-ID: Puzzlers aside, can we get a definitive answer on why this is this way? I want to understand the semantics the compiler applies here, so that I can understand why this is being considered legal. Is what Deepak says true? On Fri, Sep 5, 2025 at 12:46 PM Remi Forax wrote: > ----- Original Message ----- > > From: "Brian Goetz" > > To: "Simon Ritter" , "amber-dev" < > amber-dev at openjdk.org> > > Sent: Friday, September 5, 2025 4:30:06 PM > > Subject: Re: Question on Primitive Types in Patterns > > >> > >> I've been giving a presentation on Patterns in the Java language and > >> including some puzzles. The recent inclusion of Primitive Types in > >> Patterns has provided some interesting material. I currently have one > >> puzzle that I can't quite explain; hopefully someone on the mailing > >> list can provide clarification. > > > > On a pedagogical note, I'd like to make a case for "Java Puzzlers Talks > > Considered Harmful." They can be fun to write (which is why we get so > > many of them), but I have found that the vast majority of the time, a good > > chunk of the audience comes away with notions that are closer to "XYZ is > > broken", "XYZ is just all ad-hoc complexity", or "XYZ has no organizing > > design principle" -- even though this is rarely the intent of the > > presenter (often the opposite, in fact.) > > > > When Josh and Neal started with Puzzlers, they could at least come from > > the position of "these were some arguably-mistakes that *we* made". 
Not > > only did this give them credibility that almost no other "Java Puzzlers" > > presenter could have, but it pushed them to dig deeper to present > > language design tradeoffs _from the perspective of language designers_. > > > > I cannot count the number of times where someone has seen a "puzzler" > > presentation or blog, and learned the exact opposite lesson than it was > > trying to teach. This is an extraordinarily difficult format for > > teaching. Worse, the nature of a "Puzzlers" talk requires having a > > bunch of them, and -- even when Josh and Neal did it -- there were > > always a few that didn't live up to the standards; the format often > > overtakes the message. It is just a truly punishing format, one that > > requires world-class pedagogy and impeccable preparation to avoid the > > all-too-common outcome where many in the audience learn the wrong lesson > > (often reinforcing their preconceived but ill-examined assumptions about > > "X is bad.") > > This is a preview feature, so for me finding corner cases is fair game. > It helps us to understand the tradeoffs of this feature. > > There are always puzzlers, for every feature, but there are two > categories of puzzlers: > - those that enlighten you (like this is why Foo behaves differently as a top > level pattern or as an inner pattern), > as you said, it's quite hard to convey that in a presentation. > - those that just show the designers being too clever (let's make ?: boxing > rules different) > or the feature being consistent only with itself and not with the rest > of the world (the X cross Y problem). > > regards, > Rémi > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dvohra16 at gmail.com Sun Sep 7 17:57:56 2025 From: dvohra16 at gmail.com (Deepak Vohra) Date: Sun, 7 Sep 2025 13:57:56 -0400 Subject: Question on Primitive Types in Patterns In-Reply-To: References: <8516f22c-e0be-9c8e-aa34-bf9b608886fd@azul.com> <1782281663.23847555.1757090774349.JavaMail.zimbra@univ-eiffel.fr> Message-ID: The question's emphasis is misplaced: "The compiler is autoboxing the int, x, to create an Integer object, which always matches the first case." But the selector expression is not significant. Dominance is determined by the case label patterns themselves: whether the type of one case label pattern is a subtype of the type of another case label pattern that appears before it. On Fri, Sep 5, 2025 at 4:17 PM David Alayachew wrote: > Puzzlers aside, can we get a definitive answer on why this is this way? I > want to understand the semantics the compiler applies here, > so that I can understand why this is being considered legal. > Is what Deepak says true? > > On Fri, Sep 5, 2025 at 12:46 PM Remi Forax wrote: > >> ----- Original Message ----- >> > From: "Brian Goetz" >> > To: "Simon Ritter" , "amber-dev" < >> amber-dev at openjdk.org> >> > Sent: Friday, September 5, 2025 4:30:06 PM >> > Subject: Re: Question on Primitive Types in Patterns >> >> >> >> >> I've been giving a presentation on Patterns in the Java language and >> >> including some puzzles. The recent inclusion of Primitive Types in >> >> Patterns has provided some interesting material. I currently have one >> >> puzzle that I can't quite explain; hopefully someone on the mailing >> >> list can provide clarification. >> > >> > On a pedagogical note, I'd like to make a case for "Java Puzzlers Talks >> > Considered Harmful." 
They can be fun to write (which is why we get so >> > many of them), but I have found that the vast majority of the time, a good >> > chunk of the audience comes away with notions that are closer to "XYZ is >> > broken", "XYZ is just all ad-hoc complexity", or "XYZ has no organizing >> > design principle" -- even though this is rarely the intent of the >> > presenter (often the opposite, in fact.) >> > >> > When Josh and Neal started with Puzzlers, they could at least come from >> > the position of "these were some arguably-mistakes that *we* made". Not >> > only did this give them credibility that almost no other "Java Puzzlers" >> > presenter could have, but it pushed them to dig deeper to present >> > language design tradeoffs _from the perspective of language designers_. >> > >> > I cannot count the number of times where someone has seen a "puzzler" >> > presentation or blog, and learned the exact opposite lesson than it was >> > trying to teach. This is an extraordinarily difficult format for >> > teaching. Worse, the nature of a "Puzzlers" talk requires having a >> > bunch of them, and -- even when Josh and Neal did it -- there were >> > always a few that didn't live up to the standards; the format often >> > overtakes the message. It is just a truly punishing format, one that >> > requires world-class pedagogy and impeccable preparation to avoid the >> > all-too-common outcome where many in the audience learn the wrong lesson >> > (often reinforcing their preconceived but ill-examined assumptions about >> > "X is bad.") >> >> This is a preview feature, so for me finding corner cases is fair game. >> It helps us to understand the tradeoffs of this feature. >> >> There are always puzzlers, for every feature, but there are two >> categories of puzzlers: >> - those that enlighten you (like this is why Foo behaves differently as a >> top level pattern or as an inner pattern), >> as you said, it's quite hard to convey that in a presentation. 
>> - those that just show the designers being too clever (let's make ?: >> boxing rules different) >> or the feature being consistent only with itself and not with the rest >> of the world (the X cross Y problem). >> >> regards, >> Rémi >> > -------------- next part -------------- An HTML attachment was scrubbed... URL:
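[Archive editor's note] The pattern-versus-pattern dominance rule discussed in this thread can be sketched with plain reference type patterns, which are standard since JDK 21 (the `case byte b` forms from Simon's puzzle additionally require the JEP 507 preview). The class and method names below are made up for illustration; the point is that dominance is checked between the patterns themselves, not against the selector's static type:

```java
// Dominance is a pattern-to-pattern check: `case Integer i` does not
// dominate `case Byte b` because Byte is not a subtype of Integer,
// so both arms compile -- even though a call site that passes an int
// can only ever produce an Integer here.
public class DominanceDemo {
    static String describe(Object o) {
        return switch (o) {
            case Integer i -> "Integer";
            case Byte b    -> "Byte";   // legal: not dominated by the Integer case
            default        -> "other";
            // By contrast, putting `case Number n` before `case Integer i`
            // would be a compile-time dominance error, since every Integer
            // is a Number.
        };
    }

    public static void main(String[] args) {
        int x = 42;
        // Boxing the int selector always yields an Integer, so the Byte
        // arm is unreachable from this particular call site:
        System.out.println(describe(x));          // prints "Integer"
        System.out.println(describe((byte) 42));  // prints "Byte"
    }
}
```

This mirrors Deepak's explanation: the compiler compares `Integer i` against `Byte`/`byte b` without consulting the type of the value being switched on, so no dominance (and hence no unreachability) is reported.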