From johannes.spangenberg at hotmail.de Fri Dec 1 00:30:04 2023 From: johannes.spangenberg at hotmail.de (Johannes Spangenberg) Date: Fri, 1 Dec 2023 01:30:04 +0100 Subject: Trailing Commas In-Reply-To: References: Message-ID: Thanks for the responses. > I don't really expect this feature to happen, but I feel like this > thread ought to include some mention of the actual benefits of the > feature? You are absolutely right. Thanks for listing them. The Scala project also listed advantages and drawbacks in SIP-27 - Trailing Commas before its implementation. > For every person who has a strong preference in favour of allowing > trailing commas where they're not allowed today you'll find another > with an equally strong preference against allowing them. Additionally, > if this option exists, people will want to write or update style > guides preferring one style or the other. So the result is a lot of > time, and possibly completely disproportionate emotion, spent for > what, at best, is little gain. To me this sounds like the kind of > change that is bound to require more effort -- not on the > implementation perhaps, but in debate -- than it's worth. If I understand correctly, you're pointing out the effort required to determine whether the entire community would support this change? That is a fair point. I still want to mention that in my personal experience outside the Java ecosystem, discussions around trailing commas haven't been as contentious as those involving whitespace, tabs, or line breaks. They've generally leaned towards supporting trailing commas. This is why I unfortunately failed to list the advantages: I did not consider trailing commas particularly controversial by themselves. This might be a personal bias, of course. Nevertheless, I acknowledge that there might be more pressing topics at the moment. 
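For concreteness, Java already accepts a trailing comma in two places today -- array initializers and enum constant lists -- just not in argument, parameter, or record component lists. A minimal sketch of the status quo:

```java
public class TrailingCommas {
    // Already legal: trailing comma in an array initializer (JLS 10.6).
    static final int[] PRIMES = {2, 3, 5, 7,};

    // Already legal: trailing comma after the last enum constant (JLS 8.9).
    enum Color { RED, GREEN, BLUE, }

    public static void main(String[] args) {
        System.out.println(PRIMES.length);         // prints 4
        System.out.println(Color.values().length); // prints 3

        // Not legal today: a trailing comma in an argument list.
        // max(1, 2,);  // error: illegal start of expression
    }

    static int max(int a, int b) { return Math.max(a, b); }
}
```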
> - every change we made that "makes life easier for code generators" > makes life harder for parsers, and it is not clear that privileging > one over the other is a good idea; I think the main drawback lies in the requirement to change existing tooling. However, tools have to adapt to new Java versions all the time. Adding support for trailing commas to a parser, in my experience, requires minimal changes and doesn't significantly complicate the implementation. When I added support for trailing commas to my parser of the Nix Expression Language, I only had to make a small change to a single line (besides adding tests). > Your point about smaller diffs and reordering lists is correct, and > (I'm guessing) what originally motivated this feature. Yes, you are right. It's all about making it easier to work with lists. > However, that motivation only applies when the items you are listing > are normally or typically placed on separate lines. I think putting method arguments onto separate lines is also somewhat common, especially when considering varargs. I also imagine it would be quite useful for the component list in records. For the remaining language constructs, it might be less common, but I have definitely seen cases where other constructs were placed on separate lines as well. Thanks again. I will take from this conversation that trailing commas are unlikely to be prioritized anytime soon. Best regards, Johannes -------------- next part -------------- An HTML attachment was scrubbed... URL: From tzengshinfu at gmail.com Fri Dec 1 01:57:56 2023 From: tzengshinfu at gmail.com (tzengshinfu) Date: Fri, 1 Dec 2023 09:57:56 +0800 Subject: Trailing Commas In-Reply-To: References: Message-ID: Hello folks: I have to admit that trailing commas are quite convenient when it comes to reordering multiple lines of parameters. However, for a single line of parameters, visually, I always tend to feel like the last element has been forgotten (just a personal perspective). 
Similar situations exist in the Java world: - JSON strings (supported) - SQL syntax (unsupported) Perhaps this functionality should be placed within the IDE rather than the compiler? Using shortcuts to rearrange multiple lines of parameters, could the IDE automatically add or remove commas at the end? (However, if this is the case, this suggestion might veer away from the main topic of our mailing list.) /* GET BETTER EVERY DAY */ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjohn at xs4all.nl Fri Dec 1 04:44:35 2023 From: hjohn at xs4all.nl (John Hendrikx) Date: Fri, 01 Dec 2023 04:44:35 +0000 Subject: Trailing Commas In-Reply-To: References: Message-ID: ------ Original Message ------ >From "Ron Pressler" To "Johannes Spangenberg" Cc "amber-dev" Date 30/11/2023 20:02:42 Subject Re: Trailing Commas > Additionally, if this option exists, people will want to write or update style guides preferring one style or the other. > I feel this is the most important point in this discussion. When you can offer only one way of doing things, it eliminates needless discussions and makes code more recognizable. Take `var` -- it has led to many discussions, many variations of when it should or should not be used, how variables should be named when it is used (should they convey more information since the type is now less obvious?)... while the previous status quo (pre-`var`) meant there was only one way of doing things, resulting in more recognizable code, more consistent variable naming patterns, and far, far fewer discussions. --John From davidalayachew at gmail.com Sun Dec 3 08:04:28 2023 From: davidalayachew at gmail.com (David Alayachew) Date: Sun, 3 Dec 2023 03:04:28 -0500 Subject: Want some official clarification on a quirk about pattern-matching for instanceof In-Reply-To: <724c7af7-6710-415f-bd0d-624c3e565cd2@gmail.com> References: <724c7af7-6710-415f-bd0d-624c3e565cd2@gmail.com> Message-ID: Hello Cay, Thank you for your response! 
I certainly can agree that the advice given by Schildt, putting aside whether or not it is correct, is simply not helpful to the reader at all. What benefit does one get from mixing & and pattern-matching? That's actually part of what I wanted to know by originally posting this question - I couldn't think of a reason, but wanted to know if there actually was one. Also, it was surprising to hear such harsh words spoken about the author. I have heard this person's praises sung for years, so I am sorry to hear that. Since it looks like I was wrong, I will say as much in the SO thread. If anyone else reading has something they can add on the subject, I would appreciate it! Thank you for your time and help! David Alayachew On Wed, Nov 15, 2023 at 4:55 PM Cay Horstmann wrote: > Here is my unofficial clarification. > > Herbert Schildt is wrong when he says that "the right side of the & will > not necessarily be evaluated". It will be. The difference between && and & > with boolean operands is that & will evaluate both operands, but && will > not evaluate the right operand if the left one is false. > > You are right that in this context, it is plausible to think that iObj > could have been declared. > > Nevertheless, why use the & operator? Herbert Schildt could/should tell > his readers that there is no reason to use & other than with bit patterns. > Admittedly it is legal to use & with boolean operands in the very uncommon > situation of a side effect in the second operand. But that's subtle and may > well be surprising to readers of your code. > > The Java Language Specification lays out rules to trace the scope of > instanceof pattern definitions with && || ! and ?: operators. See > https://docs.oracle.com/javase/specs/jls/se21/html/jls-6.html#jls-6.3. > > There are no rules for & and | operators. I think that's because they were > never intended for boolean logic but only for bit patterns (and perhaps > unfortunately, side effects in boolean conditions). 
And I wholeheartedly > agree with the decision not to add that complexity to the language rules. > > My advice is to stay away from & and | for boolean operands. They were > meant to fiddle with bits. For sure, don't use instanceof with those > operators. With && and ||, and ! and ?:, the JLS rules are sensible and > unsurprising. > > Cheers, > > Cay > > PS. Many years ago, a C FAQ had this statement ( > https://www.lysator.liu.se/c/c-faq/c-5.html): The cost [of the C standard > document] is $130.00 from ANSI . . . the Annotated ANSI C Standard, with > annotations by Herbert Schildt . . . sells in the U.S. for approximately > $40. It has been suggested that the price differential between this work > and the official standard reflects the value of the annotations. > > > > > > On 15/11/2023 03.34, David Alayachew wrote: > > Bumping this one up since I didn't receive a response. > > > > On Fri, Nov 10, 2023 at 11:40 AM David Alayachew < > davidalayachew at gmail.com > wrote: > > > > Hello Amber Dev Team, > > > > Someone on StackOverflow raised an excellent question about > Pattern-Matching for instanceof, and I would like to get a response from > one of you to include in the answer. Here is the link. > > > > > https://stackoverflow.com/questions/77453336/instanceof-pattern-matching-in-java-not-compiling#77453336 > < > https://stackoverflow.com/questions/77453336/instanceof-pattern-matching-in-java-not-compiling#77453336 > > > > > > To summarize, the book that they were reading (Java: The Complete > Reference, 12th Edition by Herbert Schildt) had the following quote. > > > > -----QUOTE_START---- (with minor modifications for readability) > > > > ```java > > Number myOb = Integer.valueOf(9); > > int count = 10; > > > > // vv---- Conditional AND Operator > > if ( (count < 100) && myOb instanceof Integer iObj) > > { > > > > iObj = count; > > > > } > > ``` > > > > The above fragment compiles because the if block will execute only > when both sides of the && are true. 
Thus, the use of iObj in the if block > is valid. However, a compilation error will result if you tried to use the > & rather than the &&, as shown below. > > > > ```java > > Number myOb = Integer.valueOf(9); > > int count = 10; > > > > // v----- Bitwise Logical AND Operator > > if ( (count < 100) & myOb instanceof Integer iObj) > > { > > > > iObj = count; > > > > } > > ``` > > > > In this case, the compiler cannot know whether or not iObj will be > in scope in the if block because the right side of the & will not > necessarily be evaluated. > > > > ----QUOTE_END---- > > > > When compiling the second example, it is exactly as the author says, > we get told that the variable may not necessarily be in scope. Here is the > error I get using OpenJDK 22 Early Access. > > > > ```java > > $ java --version > > openjdk 22-ea 2024-03-19 > > OpenJDK Runtime Environment (build 22-ea+20-1570) > > OpenJDK 64-Bit Server VM (build 22-ea+20-1570, mixed mode, sharing) > > > > $ javac --version > > javac 22-ea > > > > $ cat abc.java > > public class abc > > { > > > > > > public static void main(String[] args) > > { > > > > Number myOb = Integer.valueOf(9); > > > > int count = 10; > > > > if ( (count < 100) & myOb instanceof Integer iObj ) > > { > > > > iObj = count; > > > > } > > > > } > > > > } > > > > $ javac abc.java > > abc.java:15: error: cannot find symbol > > iObj = count; > > ^ > > symbol: variable iObj > > location: class abc > > 1 error > > > > ``` > > > > I feel like I have a very good idea of why this might be the case, > but I lack the terminology to put it into words correctly. Could someone > help me out? > > > > Thank you for your time and help! > > David Alayachew > > > > -- > > Cay S. Horstmann | http://horstmann.com | mailto:cay at horstmann.com > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From davidalayachew at gmail.com Sun Dec 3 15:31:30 2023 From: davidalayachew at gmail.com (David Alayachew) Date: Sun, 3 Dec 2023 10:31:30 -0500 Subject: Some thoughts and an idea about Checked Exceptions Message-ID: Hello Amber Dev Team, Here are some thoughts I had about Checked Exceptions, plus an idea. I actually like Checked Exceptions. I think that, when used correctly, they enable an easy to read style of programming that separates the mess from the happy path. I think Checked Exceptions are at their best when only one method of a try block can throw a specific exception. Meaning, there is no overlap between the Checked Exceptions of methodA and methodB. This is great because, then, you can wrap all "Throwable" methods in a single try block, and then each catch has a 1-to-1 mapping with the code that can throw it. Conversely, Checked Exceptions are at their most inconvenient when multiple, consecutive methods can throw the same Checked Exceptions, AND WE WANT TO HANDLE THOSE SAME EXCEPTIONS DIFFERENTLY ACROSS THESE CONSECUTIVE METHODS. In this case, your only real recourse is to handle each task individually with a separate try catch block. For example - let's say I want to make a new folder, create a file in that folder, and then write content to the newly created file. That seems like a reasonable amount of work for a single method. For creating the new folder, we have Files.createDirectories() [1]. It throws the Checked Exception FileAlreadyExistsException. For creating the file and writing content to the newly created file, we have Files.write() [2]. It too throws FileAlreadyExistsException. Now, what if I want to handle the exceptions differently? The simplest use case would be -- to throw a better error message to the user. 
```java private Path save(Path parentFolder, byte[] contentToWrite) { try { Files.createDirectories(...); } catch (FileAlreadyExistsException e) { throw new IllegalStateException("helpful error message 1", e); } try { return Files.write(...); } catch (FileAlreadyExistsException e) { throw new IllegalStateException("helpful error message 2", e); } } ``` As a side observation, statement lambdas vs expression lambdas have given me this mental model that blocks are for multiple lines of code while expressions are for one. I know several discussions have been had on try-expressions and whatnot, and I agree that they aren't a good fit. Regardless, having a single method in the try block makes me feel like the noise-to-value ratio is a little high. I can sort of accept it for the catch block, but for try? Annoying. The side observation is relevant, but going back to the main point -- because I want to handle both cases differently, I must make 2 try catch blocks. I think this is at least one of the reasons why some developers dislike Checked Exceptions. Now, the obvious solution is to remove the ambiguity, one way or another. There are a couple of ways to do this. One way is to create a wrapper method that catches and throws a more specific checked exception. Instead of Files.createDirectories(), I create my own Utils.createDirectories() that throws CantCreateDirectoryBecauseFileAlreadyExistsException. Then, I can just catch that specific exception and handle it as expected. But this means writing a whole bunch of utility style methods to work around a lack of specificity that can only be achieved by wrapping individual lines of code in blocks. I will hereby call them micro-blocks. Ignoring the fact that the utility methods just clog up my codebase, they also tend to be easy to misplace or I accidentally make duplicates of them without meaning to. In short, it's a whole bunch of low-value code that is easy to forget and only exists to avoid some friction. 
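As a hedged sketch, the wrapper-method workaround above could be generalized so that only the context message, not a whole utility method, lives at each call site. The names ThrowingSupplier and attempt here are illustrative, not existing API:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;

public class Attempt {
    // Hypothetical functional interface: like Supplier, but may throw IOException.
    @FunctionalInterface
    interface ThrowingSupplier<T> {
        T get() throws IOException;
    }

    // Runs the action; on FileAlreadyExistsException, rethrows with the
    // caller-supplied context message attached.
    static <T> T attempt(String context, ThrowingSupplier<T> action) {
        try {
            return action.get();
        } catch (FileAlreadyExistsException e) {
            throw new IllegalStateException(context, e);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Each call carries its own error message, without a separate try block.
        Integer ok = attempt("creating folder", () -> 42);
        System.out.println(ok); // prints 42
        try {
            attempt("creating file", () -> {
                throw new FileAlreadyExistsException("/tmp/x");
            });
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // prints "creating file"
        }
    }
}
```

This still trades one kind of ceremony for another (a lambda per statement), which is part of why it doesn't fully answer the complaint.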
There are a few other ways, but they involve either writing something resembling micro-blocks, or more indirection, like with the utility methods. Here's my pie-in-the-sky idea. I don't care about syntax. But for now, I will call it Tagged Statements and Tagged Exceptions. ```java private Path save(Path parentFolder, byte[] contentToWrite) { try { #folder Files.createDirectories(...); #file return Files.write(...); } catch (#folder FileAlreadyExistsException e) { throw new IllegalStateException("helpful error message 1", e); } catch (#file FileAlreadyExistsException e) { throw new IllegalStateException("helpful error message 2", e); } } ``` Doing it this way, all ambiguity is gone, while boiling things down to only the code that needs to be there. Plus, this also gives us the benefit of using the code we have (already written). The semantics are simple. * All statements in a method body can be prefixed by a tag -- called a tagged statement. * #, followed by an identifier, followed by whitespace, followed by the statement to be tagged. * All exceptions thrown by the tagged statement can be referenced in catch parameters via a tagged Exception -- an ExceptionType prefixed by the same # identifier. * #, followed by an identifier, followed by whitespace, followed by the ExceptionType to be tagged. * You can't put the # identifier in the middle of a statement (System.out.println(#1 someMethod()) <---- invalid). And the best part is, this blends in nicely with existing semantics. If you have a catch block with no tagged catch parameters, then it works the way that it always has. But if you want to specify, then use a tagged exception. If you want to handle multiple types of exceptions using the "|" symbol, that logic works exactly for tagged exceptions too. You can even mix and match them. Again, I don't care about syntax. I care about the fact that this is something you can do at the call site ad-hoc. 
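For comparison, something in the spirit of these tags can be approximated in today's Java by wrapping each statement's exceptions in a marker type at the call site; clumsier than the proposal, but it keeps a single try block. All names here (Tagged, tag) are illustrative only:

```java
import java.nio.file.FileAlreadyExistsException;

public class TaggedSketch {
    // Illustrative wrapper: carries a tag identifying which statement threw.
    static class Tagged extends RuntimeException {
        final String tag;
        Tagged(String tag, Exception cause) {
            super(cause);
            this.tag = tag;
        }
    }

    @FunctionalInterface
    interface ThrowingRunnable { void run() throws Exception; }

    // Runs the statement; any exception is rethrown carrying the tag.
    static void tag(String tag, ThrowingRunnable stmt) {
        try {
            stmt.run();
        } catch (Exception e) {
            throw new Tagged(tag, e);
        }
    }

    public static void main(String[] args) {
        try {
            tag("folder", () -> { /* Files.createDirectories(...) */ });
            tag("file", () -> { throw new FileAlreadyExistsException("f"); });
        } catch (Tagged t) {
            // One catch, dispatching on the tag -- the moral equivalent of
            // catch (#folder ...) / catch (#file ...).
            switch (t.tag) {
                case "folder" -> System.out.println("helpful error message 1");
                case "file" -> System.out.println("helpful error message 2");
            }
        }
    }
}
```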
The part that I like the most about this is that it actually makes try-catch way more attractive. Obviously, if I am trying to do control flow, then try catch is still not the right vehicle (and if I still must, then it should really be handled in its own try catch block or a separate method). But now, all those errors that I didn't really want to specify or build around become really easy to do. I just add an inline signifier, then a matching catch block. The only hit to readability is the prefix. You can make it verbose if you like (#recoverable) or terse (#1). As a potential bonus, it might be a good idea to allow several different statements to have the same # prefix. Meaning, methodA, methodC, and methodE all have #1, but methodB and methodD have #2. I am indifferent to this, and I am fine leaving it out. Another benefit is that it allows you to handle all Exceptions from that particular join point the same. Let's say there is a method call in your method that all failures it has can be handled the same. Simply attach a prefix to it (#blah) and then make a catch (#blah Exception e) or something similar. I would also add a warning if a method has a tagged statement that is not explicitly referenced by a catch block. Catch parameters must spell out the tag explicitly to count as an explicit reference. Now, this solution doesn't solve the "bigger" problems (some would say) with Checked Exceptions (Streams/Lambdas + Checked Exceptions). But I think it makes Checked Exceptions and try catch blocks (both good things that we should be making better use of) extremely ergonomic and easy to handle. Thoughts? Thank you for your time! David Alayachew [1]= https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/nio/file/Files.html#createDirectories(java.nio.file.Path,java.nio.file.attribute.FileAttribute...) 
[2]= https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/nio/file/Files.html#write(java.nio.file.Path,byte%5B%5D,java.nio.file.OpenOption...) -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjohn at xs4all.nl Sun Dec 3 16:21:47 2023 From: hjohn at xs4all.nl (John Hendrikx) Date: Sun, 03 Dec 2023 16:21:47 +0000 Subject: Some thoughts and an idea about Checked Exceptions In-Reply-To: References: Message-ID: How about: try { // create directory try { // create file } catch (FileAlreadyExistsException e) { throw new IllegalStateException("helpful error message 2", e); } } catch (FileAlreadyExistsException e) { throw new IllegalStateException("helpful error message 1", e); } Extract them as functions to make it nicer. More often though I imagine that it either does not matter much to the user (the action failed), or in this specific example that the directory already existing is not a failure condition. --John ------ Original Message ------ >From "David Alayachew" To "amber-dev" Date 03/12/2023 16:31:30 Subject Some thoughts and an idea about Checked Exceptions >Hello Amber Dev Team, > >Here are some thoughts I had about Checked Exceptions, plus an idea. > >I actually like Checked Exceptions. I think that, when used correctly, >they enable an easy to read style of programming that separates the >mess from the happy path. > >I think Checked Exceptions are at their best when only one method of a >try block can throw a specific exception. Meaning, there is no overlap >between the Checked Exceptions of methodA and methodB. This is great >because, then, you can wrap all "Throwable" methods in a single try >block, and then each catch has a 1-to-1 mapping with the code that can >throw it. > >Conversely, Checked Exceptions are at their most inconvenient when >multiple, consecutive methods can throw the same Checked Exceptions, >AND WE WANT TO HANDLE THOSE SAME EXCEPTIONS DIFFERENTLY ACROSS THESE >CONSECUTIVE METHODS. 
In this case, your only real recourse is to handle >each task individually with a separate try catch block. > >For example - let's say I want to make a new folder, create a file in >that folder, and then write content to the newly created file. That >seems like a reasonable amount of work for a single method. > >For creating the new folder, we have Files.createDirectories() [1]. It >throws the Checked Exception FileAlreadyExistsException. > >For creating the file and writing content to the newly created file, we >have Files.write() [2]. It too throws FileAlreadyExistsException. > >Now, what if I want to handle the exceptions differently? The simplest >use case would be -- to throw a better error message to the user. > >```java >private Path save(Path parentFolder, byte[] contentToWrite) >{ > > try > { > Files.createDirectories(...); > } > > catch (FileAlreadyExistsException e) > { > throw new IllegalStateException("helpful error message 1", e); > } > > try > { > return Files.write(...); > } > > catch (FileAlreadyExistsException e) > { > throw new IllegalStateException("helpful error message 2", e); > } > >} >``` > >As a side observation, statement lambdas vs expression lambdas have >given me this mental model that blocks are for multiple lines of code >while expressions are for one. I know several discussions have been had >on try-expressions and whatnot, and I agree that they aren't a good >fit. Regardless, having a single method in the try block makes me feel >like the noise-to-value ratio is a little high. I can sort of accept it >for the catch block, but for try? Annoying. > >The side observation is relevant, but going back to the main point -- >because I want to handle both cases differently, I must make 2 try >catch blocks. I think this is at least one of the reasons why some >developers dislike Checked Exceptions. > >Now, the obvious solution is to remove the ambiguity, one way or >another. There are a couple of ways to do this. 
> >One way is to create a wrapper method that catches and throws a more >specific checked exception. Instead of Files.createDirectories(), I >create my own Utils.createDirectories() that throws >CantCreateDirectoryBecauseFileAlreadyExistsException. Then, I can just >catch that specific exception and handle it as expected. > >But this means writing a whole bunch of utility style methods to work >around a lack of specificity that can only be achieved by wrapping >individual lines of code in blocks. I will hereby call them >micro-blocks. Ignoring the fact that the utility methods just clog up >my codebase, they also tend to be easy to misplace or I accidentally >make duplicates of them without meaning to. In short, its a whole bunch >of low-value code that is easy to forget and only exists to avoid some >friction. > >There are a few other ways, but they involve either writing something >resembling micro-blocks, or more indirection, like with the utility >methods. > >Here's my pie-in-the-sky idea. I don't care about syntax. But for now, >I will call it Tagged Statements and Tagged Exceptions. > >```java >private Path save(Path parentFolder, byte[] contentToWrite) >{ > > try > { > #folder Files.createDirectories(...); > #file return Files.write(...); > } > catch (#folder FileAlreadyExistsException e) > { > throw new IllegalStateException("helpful error message 1", e); > } > catch (#file FileAlreadyExistsException e) > { > throw new IllegalStateException("helpful error message 2", e); > } > >} >``` > >Doing it this way, all ambiguity is gone, while boiling things down to >only the code that needs to be there. Plus, this also gives us the >benefit of using the code we have (already written). > >The semantics are simple. > >* All statements in a method body can be prefixed by a tag -- called a >tagged statement. > > * #, followed by an identifier, followed by whitespace, followed by >the statement to be tagged. 
> >* All exceptions thrown by the tagged statement can be referenced in >catch parameters via a tagged Exception -- an ExceptionType prefixed by >the same # identifier. > > * #, followed by an identifier, followed by whitespace, followed by >the ExceptionType to be tagged. > >* You can't put the # identifier in the middle of a statement >(System.out.println(#1 someMethod()) <---- invalid). > >And the best part is, this blends in nicely with existing semantics. If >you have a catch block with no tagged catch parameters, then it works >the way that it always has. But if you want to specify, then use a >tagged exception. If you want to handle multiple types of exceptions >using the "|" symbol, that logic works exactly for tagged exceptions >too. You can even mix and match them. Again, I don't care about syntax. >I care about the fact that this is something you can do at the call >site ad-hoc. > >The part that I like the most about this is that it actually makes >try-catch way more attractive. Obviously, if I am trying to do control >flow, then try catch is still not the right vehicle (and if I still >must, then it should really be handled in its own try catch block or a >separate method). But now, all those errors that I didn't really want >to specify or build around becomes really easy to do. I just add an >inline signifier, then a matching catch block. The only hit to >readability is the prefix. You can make it verbose if you like >(#recoverable) or terse (#1). > >As a potential bonus, it might be a good idea to allow several >different statements to have the same # prefix. Meaning, methodA, >methodC, and methodE all have #1, but methodB and methodD have #2. I am >indifferent to this, and I am fine leaving it out. > >Another benefit is that it allows you to handle all Exceptions from >that particular join point the same. Let's say there is a method call >in your method that all failures it has can be handled the same. 
Simply >attach a prefix to it (#blah) and then make a catch (#blah Exception e) >or something similar. > >I would also add a warning if a method has a tagged statement that is >not explicitly referenced by a catch block. Catch parameters must spell >out the tag explicitly to count as an explicit reference. > >Now, this solution doesn't solve the "bigger" problems (some would say) >with Checked Exceptions (Streams/Lambdas + Checked Exceptions). But I >think it makes Checked Exceptions and try catch blocks (both >good things that we should be making better use of) extremely ergonomic >and easy to handle. > >Thoughts? > >Thank you for your time! >David Alayachew > >[1]=https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/nio/file/Files.html#createDirectories(java.nio.file.Path,java.nio.file.attribute.FileAttribute...) >[2]=https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/nio/file/Files.html#write(java.nio.file.Path,byte%5B%5D,java.nio.file.OpenOption...) -------------- next part -------------- An HTML attachment was scrubbed... URL: From archie.cobbs at gmail.com Sun Dec 3 17:22:56 2023 From: archie.cobbs at gmail.com (Archie Cobbs) Date: Sun, 3 Dec 2023 11:22:56 -0600 Subject: Some thoughts and an idea about Checked Exceptions In-Reply-To: References: Message-ID: On Sun, Dec 3, 2023 at 11:01 AM David Alayachew wrote: > Here are some thoughts I had about Checked Exceptions, plus an idea. > Some random thoughts/opinions on this idea... Let's take the big picture view. Writing Java source code is basically a serialization exercise: in your head you have some complicated, tree-like control flow & logic data structure but to express that in Java you have to convert it into a linear sequence of UTF-16 codes. So in a sense the real "problem" you are addressing is that we're trying to jam a tree/graph data structure into a linear sequence. 
As a result, there's never going to be a "pretty" way to do it, only different ways of doing it adequately. In other words, the best we can hope for is something well-defined and unambiguous that everyone can understand and agree on, and then get on with our lives. And that's what we already have. Could the Java language be "compressed" in various ways so that we have to type fewer characters? Yes! There are lots of ways to do that, including your suggestion. Another idea is that we could replace "class", "interface", and "protected" with "cls", "infc", and "prtd". But where do you stop? And what are your criteria for stopping? The philosophy behind the design of Java (as far as I can infer it) is to prioritize logical clarity, not brevity. So we stop when we get something that is reasonably adequate. As has been said before: "If what you want is a language for algorithm compression, try Perl" :) -Archie -- Archie L. Cobbs -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Sun Dec 3 18:30:18 2023 From: davidalayachew at gmail.com (David Alayachew) Date: Sun, 3 Dec 2023 13:30:18 -0500 Subject: Some thoughts and an idea about Checked Exceptions In-Reply-To: References: Message-ID: Hello John, Thank you for your response! > Extract them as functions to make it nicer. Yes, this is what I mentioned in my original post. Ultimately, that is the solution we are at, but my real world code runs into readability problems the more I rely on that. I am bringing all of this up because I feel like that is an unbalanced cost for the right answer. > More often though I imagine that it either > does not matter much to the user (the action > failed), or in this specific example that the > directory already existing is not a failure > condition. Oh this is a super simplified example of a real problem I run into at work. The most recent example from work involves me calling 12 different services to fetch data models. 
There are some failures worth failing on, others we can try and redeem, and others that are non-issues. Handling each one individually with a try-catch block takes a lot of effort, and oftentimes, makes the logic more spread out and difficult to follow. I am not asking for conciseness, I am saying my current solution is fairly difficult to read, and I'd like that to change. I am proposing one solution. On Sun, Dec 3, 2023 at 11:21 AM John Hendrikx wrote: > How about: > > try { > // create directory > try { > // create file > } > catch (FileAlreadyExistsException e) { > throw new IllegalStateException("helpful error message 2", > e); > } > } > catch (FileAlreadyExistsException e) { > throw new IllegalStateException("helpful error message 1", e); > } > > Extract them as functions to make it nicer. > > More often though I imagine that it either does not matter much to the > user (the action failed), or in this specific example that the directory > already existing is not a failure condition. > > --John > > ------ Original Message ------ > From "David Alayachew" > To "amber-dev" > Date 03/12/2023 16:31:30 > Subject Some thoughts and an idea about Checked Exceptions > > Hello Amber Dev Team, > > Here are some thoughts I had about Checked Exceptions, plus an idea. > > I actually like Checked Exceptions. I think that, when used correctly, > they enable an easy to read style of programming that separates the mess > from the happy path. > > I think Checked Exceptions are at their best when only one method of a try > block can throw a specific exception. Meaning, there is no overlap between > the Checked Exceptions of methodA and methodB. This is great because, then, > you can wrap all "Throwable" methods in a single try block, and then each > catch has a 1-to-1 mapping with the code that can throw it. 
> > Conversely, Checked Exceptions are at their most inconvenient when > multiple, consecutive methods can throw the same Checked Exceptions, AND WE > WANT TO HANDLE THOSE SAME EXCEPTIONS DIFFERENTLY ACROSS THESE CONSECUTIVE > METHODS. In this case, your only real recourse is to handle each task > individually with a separate try catch block. > > For example - let's say I want to make a new folder, create a file in that > folder, and then write content to the newly created file. That seems like a > reasonable amount of work for a single method. > > For creating the new folder, we have Files.createDirectories() [1]. It > throws the Checked Exception FileAlreadyExistsException. > > For creating the file and writing content to the newly created file, we > have Files.write() [2]. It too throws FileAlreadyExistsException. > > Now, what if I want to handle the exceptions differently? The simplest use > case would be -- to throw a better error message to the user. > > ```java > private Path save(Path parentFolder, byte[] contentToWrite) > { > > try > { > Files.createDirectories(...); > } > > catch (FileAlreadyExistsException e) > { > throw new IllegalStateException("helpful error message 1", e); > } > > try > { > return Files.write(...); > } > > catch (FileAlreadyExistsException e) > { > throw new IllegalStateException("helpful error message 2", e); > } > > } > ``` > > As a side observation, statement lambdas vs expression lambdas have given > me this mental model that blocks are for multiple lines of code while > expressions are for one. I know several discussions have been had on > try-expressions and whatnot, and I agree that they aren't a good fit. > Regardless, having a single method in the try block makes me feel like the > noise-to-value ratio is a little high. I can sort of accept it for the > catch block, but for try? Annoying. 
> > The side observation is relevant, but going back to the main point -- > because I want to handle both cases differently, I must make 2 try catch > blocks. I think this is at least one of the reasons why some developers > dislike Checked Exceptions. > > Now, the obvious solution is to remove the ambiguity, one way or another. > There are a couple of ways to do this. > > One way is to create a wrapper method that catches and throws a more > specific checked exception. Instead of Files.createDirectories(), I create > my own Utils.createDirectories() that throws > CantCreateDirectoryBecauseFileAlreadyExistsException. Then, I can just > catch that specific exception and handle it as expected. > > But this means writing a whole bunch of utility style methods to work > around a lack of specificity that can only be achieved by wrapping > individual lines of code in blocks. I will hereby call them micro-blocks. > Ignoring the fact that the utility methods just clog up my codebase, they > also tend to be easy to misplace or I accidentally make duplicates of them > without meaning to. In short, its a whole bunch of low-value code that is > easy to forget and only exists to avoid some friction. > > There are a few other ways, but they involve either writing something > resembling micro-blocks, or more indirection, like with the utility methods. > > Here's my pie-in-the-sky idea. I don't care about syntax. But for now, I > will call it Tagged Statements and Tagged Exceptions. 
> > ```java > private Path save(Path parentFolder, byte[] contentToWrite) > { > > try > { > #folder Files.createDirectories(...); > #file return Files.write(...); > } > catch (#folder FileAlreadyExistsException e) > { > throw new IllegalStateException("helpful error message 1", e); > } > catch (#file FileAlreadyExistsException e) > { > throw new IllegalStateException("helpful error message 2", e); > } > > } > ``` > > Doing it this way, all ambiguity is gone, while boiling things down to > only the code that needs to be there. Plus, this also gives us the benefit > of using the code we have (already written). > > The semantics are simple. > > * All statements in a method body can be prefixed by a tag -- called a > tagged statement. > > * #, followed by an identifier, followed by whitespace, followed by > the statement to be tagged. > > * All exceptions thrown by the tagged statement can be referenced in catch > parameters via a tagged Exception -- an ExceptionType prefixed by the same > # identifier. > > * #, followed by an identifier, followed by whitespace, followed by > the ExceptionType to be tagged. > > * You can't put the # identifier in the middle of a statement > (System.out.println(#1 someMethod()) <---- invalid). > > And the best part is, this blends in nicely with existing semantics. If > you have a catch block with no tagged catch parameters, then it works the > way that it always has. But if you want to specify, then use a tagged > exception. If you want to handle multiple types of exceptions using the "|" > symbol, that logic works exactly for tagged exceptions too. You can even > mix and match them. Again, I don't care about syntax. I care about the fact > that this is something you can do at the call site ad-hoc. > > The part that I like the most about this is that it actually makes > try-catch way more attractive. 
Obviously, if I am trying to do control > flow, then try catch is still not the right vehicle (and if I still must, > then it should really be handled in its own try catch block or a separate > method). But now, all those errors that I didn't really want to specify or > build around become really easy to handle. I just add an inline signifier, > then a matching catch block. The only hit to readability is the prefix. You > can make it verbose if you like (#recoverable) or terse (#1). > > As a potential bonus, it might be a good idea to allow several different > statements to have the same # prefix. Meaning, methodA, methodC, and > methodE all have #1, but methodB and methodD have #2. I am indifferent to > this, and I am fine leaving it out. > > Another benefit is that it allows you to handle all Exceptions from that > particular join point the same. Let's say there is a method call in your > method whose failures can all be handled the same way. Simply attach a > prefix to it (#blah) and then make a catch (#blah Exception e) or something > similar. > > I would also add a warning if a method has a tagged statement that is not > explicitly referenced by a catch block. Catch parameters must spell out the > tag explicitly to count as an explicit reference. > > Now, this solution doesn't solve the "bigger" problems (some would say) > with Checked Exceptions (Streams/Lambdas + Checked Exceptions). But I think > it makes Checked Exceptions and try catch blocks (both good things > that we should be making better use of) extremely ergonomic and easy to > handle. > > Thoughts? > > Thank you for your time! > David Alayachew > > [1]= > https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/nio/file/Files.html#createDirectories(java.nio.file.Path,java.nio.file.attribute.FileAttribute.. > .) > [2]= > https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/nio/file/Files.html#write(java.nio.file.Path,byte%5B%5D,java.nio.file.OpenOption.. > .)
> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Sun Dec 3 19:13:33 2023 From: davidalayachew at gmail.com (David Alayachew) Date: Sun, 3 Dec 2023 14:13:33 -0500 Subject: Some thoughts and an idea about Checked Exceptions In-Reply-To: References: Message-ID: Hello Archie, Thank you for your response! > complicated, tree-like control flow & logic data structure > ... > in Java you have to convert it into a linear sequence > ... > So in a sense the real "problem" you are addressing is > that we're trying to jam a tree/graph data structure into > a linear sequence. I actually addressed this in my original post. Long story short, my solution is not meant to take an existing construct and replace it with something simpler or more concise. It's meant to let 2 or more statements that throw the same exception be handled in different ways while staying in the same scope. It's meant to communicate a clear intent and demonstrate a relationship between 2 statements that otherwise is not immediately obvious. It's like an if-else statement vs a ternary operator. The fact that it is more concise is a side effect. Furthermore, nothing is being replaced here. If you are forced to make a logical control flow decision based on an exception being thrown (fetch data model A, but if exception, then use default data model), then my solution is not something you should be using. The existing try catch blocks and the ways we use them meet that need. John's example above is a good demonstration of how to handle control flow via exceptions, if we are forced to make logical control flow decisions based on exception results. > Could the Java language be "compressed" in various ways > so that we have to type fewer characters? I have 0 desire to make this language more concise, especially when we are talking about exceptions. Exceptions are where I want things to be clear and obvious.
I am making this post because my current codebase is doing exactly what you and John are suggesting, and our readability is suffering because of it. I am trying to find a way to address that, and this just happens to be the solution I landed upon. I should also add -- I landed on this solution by thinking about how best to communicate a relationship. 2 statements throw the same exception, but should be handled in different ways. The happy path control flow does not depend on the exception, we just want to handle the failure and deal with it in a way that communicates how statements relate to each other. How best to accomplish that? Well, we use blocks in Java to communicate relationships between statements. The try block gives us one for free. And we use catch blocks to communicate exceptional control flow. It is meant to truly be exceptional, so we should avoid putting logical control flow there unless necessary. And if necessary, best that we wrap the whole statement in a single try catch block. But what about the remainders? Not all exceptions touch control flow, at least not in any meaningful way. Sometimes, we just want to "fail more gracefully". And indeed, we should strive for that. One of the best examples of failing gracefully is to wrap an exception with a more helpful exception that communicates the problem better. And if we can communicate relationships between failures more easily, then our code is all the better for it. > The philosophy behind the design of Java (as far as I can > infer it) is to prioritize logical clarity, not brevity. > So we stop when we get something that is reasonably > adequate. Well by that logic, this post I made is necessary. The existing solution is certainly not adequate for me because logical clarity is stretched and strained across multiple block scopes that have nothing communicating how they relate to each other. I see one way to make things adequate, so I am making a post. 
And I think it is a good solution because it blends in well with what Java is already doing in other contexts, let alone for exceptions. Again, I consider this solution akin to an if statement vs a ternary operator. Thank you both for your time! David Alayachew On Sun, Dec 3, 2023 at 12:23 PM Archie Cobbs wrote: > On Sun, Dec 3, 2023 at 11:01 AM David Alayachew > wrote: > >> Here are some thoughts I had about Checked Exceptions, plus an idea. >> > > Some random thoughts/opinions on this idea... > > Let's take the big picture view. Writing Java source code is basically a > serialization exercise: in your head you have some complicated, tree-like > control flow & logic data structure but to express that in Java you have to > convert it into a linear sequence of UTF-16 codes. > > So in a sense the real "problem" you are addressing is that we're trying > to jam a tree/graph data structure into a linear sequence. As a result, > there's never going to be a "pretty" way to do it, only different ways of > doing it adequately. > > In other words, the best we can hope for is something well-defined and > unambiguous that everyone can understand and agree on, and then get on with > our lives. And that's what we already have. > > Could the Java language be "compressed" in various ways so that we have to > type fewer characters? Yes! There are lots of ways to do that, including > your suggestion. Another idea is that we could replace "class", > "interface", and "protected" with "cls", "infc", and "prtd". But where do > you stop? And what is your criterion for stopping? > > The philosophy behind the design of Java (as far as I can infer it) is to > prioritize logical clarity, not brevity. So we stop when we get something > that is reasonably adequate. > > As has been said before: "If what you want is a language for algorithm > compression, try Perl" :) > > -Archie > > -- > Archie L. Cobbs > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brian.goetz at oracle.com Mon Dec 4 00:52:25 2023 From: brian.goetz at oracle.com (Brian Goetz) Date: Sun, 3 Dec 2023 19:52:25 -0500 Subject: Some thoughts and an idea about Checked Exceptions In-Reply-To: References: Message-ID: <525331b8-5a56-4544-b2d3-7127a9b2c9ac@oracle.com> > > I actually like Checked Exceptions. I think that, when used correctly, > they enable an easy to read style of programming that separates the > mess from the happy path. This is an important point; for all the folks out there who love to thump the table with "Checked exceptions were a failed experiment", there are plenty of people who see value in them quietly getting work done. > I think Checked Exceptions are at their best when only one method of a > try block can throw a specific exception. Meaning, there is no overlap > between the Checked Exceptions of methodA and methodB. This is great > because, then, you can wrap all "Throwable" methods in a single try > block, and then each catch has a 1-to-1 mapping with the code that can > throw it. I know what you mean, but I think there are several moving parts here. There is the checked-vs-unchecked dimension (which is a declaration-site property), which ideally is about whether the exception has a reasonably foreseeable recovery. (FileNotFoundException is recoverable -- you can prompt the user for another file name, whereas getting an IOException on close() is not recoverable -- what are you going to do, close it again?) So checked exceptions are best when they are signalling something that a user _wants_ to catch so they can try something else, and unchecked exceptions are better when there is no foreseeable recovery other than log it, cancel the current unit of work, and then either exit or go back to the main event loop. The point you raise is really more about the `try` statement than checked exceptions themselves; the main body of a `try` is a block statement.
The block might do IO in a dozen places, but most of the time, we want to treat them all as "some IO operation in this block failed"; it is rare that we want to separate failure on a write from failure on a close. Of course, there are "exceptions" to every rule. Catch was also later extended to let you handle multiple exceptions with the same handler (catch IOE|SQLE), which further fits into the "aggregation" aspect of try-catch. The proposal you make, which basically allows users to associate invocation context with a region of code that is attached to any exceptions thrown from that region, is interesting but likely too specialized to be broadly useful. Attaching metadata to exceptions is a rich and useful vein of ideas (including attaching context information useful in debugging or diagnosing test failure), but this one seems at the narrow end of that vein. But, if you look at catch clauses as pattern matches, which currently are restricted to the type of the exception, there is much room to refine the specificity of such patterns. -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Mon Dec 4 01:37:47 2023 From: davidalayachew at gmail.com (David Alayachew) Date: Sun, 3 Dec 2023 20:37:47 -0500 Subject: Some thoughts and an idea about Checked Exceptions In-Reply-To: <525331b8-5a56-4544-b2d3-7127a9b2c9ac@oracle.com> References: <525331b8-5a56-4544-b2d3-7127a9b2c9ac@oracle.com> Message-ID: Hello Brian, Thank you for your response! > > I actually like Checked Exceptions. I think that, when > > used correctly, they enable an easy to read style of > > programming that separates the mess from the happy > > path. > > This is an important point; for all the folks out there > who love to thump the table with "Checked exceptions were > a failed experiment", there are plenty of people who see > value in them quietly getting work done. In their defense, it took me a long time to "see the light" too.
Long story short, I had the great pleasure to work on a codebase at work that actually treats its Exceptions like first class citizens. Using my toy example from above, we might have had the directory creation Exception have a DirectoryNameClashException, which has subtypes clarifying the different types of name clash (clash with another directory, vs clash with a file on the rare OS that cares about that). So, you can use the Exception type to decide what to fail on and what to recover on. > > I think Checked Exceptions are at their best when only > > one method of a try block can throw a specific > > exception. Meaning, there is no overlap between the > > Checked Exceptions of methodA and methodB. This is > > great because, then, you can wrap all "Throwable" > > methods in a single try block, and then each catch has > > a 1-to-1 mapping with the code that can throw it. > > I know what you mean, but I think there are several > moving parts here. There is the checked-vs-unchecked > dimension (which is a declaration-site property), which > ideally is about whether the exception has a reasonably > forseeable recovery. (FileNotFoundException is > recoverable -- you can prompt the user for another file > name, whereas getting an IOException on close() is not > recoverable -- what are you going to do, close it again?) > ... > The point you raise is really more about the `try` > statement than checked exceptions themselves Thanks for helping translate my thoughts. Yes, try is really the target here. I was too focused on my specific use case to notice that at the time. > So checked exceptions are best when they are signalling > something that a user _wants_ to catch so they can try > something else, and unchecked exceptions are better when > there is no forseeable recovery other than log it, cancel > the current unit of work, and then either exit or go back > to the main event loop. I appreciate you pointing this out. This is a way better mental model than I had. 
> The proposal you make, which basically allows users to > associate invocation context with a region of code that > is attached to any exceptions thrown from that region, is > interesting but likely too specialized to be broadly > useful. Attaching metadata to exceptions is a rich and > useful vein of ideas (including attaching context > information useful in debugging or diagnosing test > failure), but this one seems at the narrow end of that > vein. But, if you look at catch clauses as pattern > matches, which currently are restricted to the type of > the exception, there is much room to refine the > specificity of such patterns. WOW - the pattern matching well is bottomless. I feel like the idea was mentioned before, but I never considered it like this. So, we could have the same, basic exception that we always had, but could now enhance the Exception to include metadata that we can unbox with pattern-matching to get a more specific cause? If so, that's a way better idea than mine. All existing Exception infrastructure stays as is, but underneath the surface, we can enhance the error-throwing logic to include better detail. The example I gave above for FileAlreadyExistsException could include the offending path. Then, rather than having an Exception for clashing with a file and another for clashing with a directory, I can just have a clash exception that has a field to represent what I am clashing with, extract it via Pattern Matching, and maybe there is even a nested pattern on that object that tells me whether it is a file or directory. It would mean we are leaning harder on the contracts of methods, but that actually sounds like a good thing. For example, in the past we would just say "if this failure happens, it will throw ExceptionA". Now, we might want to specify what metadata will be included in ExceptionA, so that users know what to extract, in case it isn't obvious.
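To make that idea concrete, here is a minimal runnable sketch of the shape being described, using sealed interfaces and record patterns that exist in Java 21 today. The names (`PathClashException`, `ClashTarget`) are hypothetical, and note that the metadata is matched in ordinary code rather than in the `catch` clause itself, since catch-site pattern matching remains speculative:

```java
import java.nio.file.Path;

// Hypothetical metadata carried by a clash exception: what did we clash with?
sealed interface ClashTarget permits ClashTarget.File, ClashTarget.Directory {
    record File(Path path) implements ClashTarget {}
    record Directory(Path path) implements ClashTarget {}
}

// Hypothetical exception that carries structured metadata instead of
// encoding the file-vs-directory distinction in its type hierarchy.
class PathClashException extends Exception {
    private final ClashTarget target;
    PathClashException(String message, ClashTarget target) {
        super(message);
        this.target = target;
    }
    ClashTarget target() { return target; }
}

public class ClashDemo {
    // Exhaustive switch over the sealed metadata, with record patterns
    // extracting the offending path -- no instanceof chains needed.
    static String describe(PathClashException e) {
        return switch (e.target()) {
            case ClashTarget.File(Path p)      -> "clashes with file " + p;
            case ClashTarget.Directory(Path p) -> "clashes with directory " + p;
        };
    }

    public static void main(String[] args) {
        try {
            throw new PathClashException("name taken",
                    new ClashTarget.Directory(Path.of("data")));
        } catch (PathClashException e) {
            System.out.println(describe(e)); // prints "clashes with directory data"
        }
    }
}
```

One exception type plus one sealed metadata hierarchy replaces a family of subtypes, which is exactly the "lean on the method's contract" trade-off described above.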
In that case, I'll put my current idea to rest, since I see that Pattern Matching pretty much eclipses it in almost every single way. Hope to see this vein of pattern matching see the light of day soon! Thank you for your time and help! David Alayachew On Sun, Dec 3, 2023 at 7:52?PM Brian Goetz wrote: > > > I actually like Checked Exceptions. I think that, when used correctly, > they enable an easy to read style of programming that separates the mess > from the happy path. > > > This is an important point; for all the folks out there who love to thump > the table with "Checked exceptions were a failed experiment", there are > plenty of people who see value in them quietly getting work don. > > I think Checked Exceptions are at their best when only one method of a try > block can throw a specific exception. Meaning, there is no overlap between > the Checked Exceptions of methodA and methodB. This is great because, then, > you can wrap all "Throwable" methods in a single try block, and then each > catch has a 1-to-1 mapping with the code that can throw it. > > > I know what you mean, but I think there are several moving parts here. > There is the checked-vs-unchecked dimension (which is a declaration-site > property), which ideally is about whether the exception has a reasonably > forseeable recovery. (FileNotFoundException is recoverable -- you can > prompt the user for another file name, whereas getting an IOException on > close() is not recoverable -- what are you going to do, close it again?) > So checked exceptions are best when they are signalling something that a > user _wants_ to catch so they can try something else, and unchecked > exceptions are better when there is no forseeable recovery other than log > it, cancel the current unit of work, and then either exit or go back to the > main event loop. > > The point you raise is really more about the `try` statement than checked > exceptions themselves; the main body of a `try` is a block statement. 
The > block might do IO in a dozen places, but most of the time, we want to treat > them all as "some IO operation in this block failed"; it is rare that we > want to separate failure on a write from failure on a close. Of course, > there are "exceptions" to every rule. Catch was also later extended to let > you handle multiple exceptions with the same handler (catch IOE|SQLE), > which further fits into the "aggregation" aspect of try-catch. > > The proposal you make, which basically allows users to associate > invocation context with a region of code that is attached to any exceptions > thrown from that region, is interesting but likely too specialized to be > broadly useful. Attaching metadata to exceptions is a rich and useful vein > of ideas (including attaching context information useful in debugging or > diagnosing test failure), but this one seems at the narrow end of that > vein. But, if you look at catch clauses as pattern matches, whcih > currently are restricted to the type of the exception, there is much room > to refine the specificity of such patterns. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From numeralnathan at gmail.com Mon Dec 4 15:57:57 2023 From: numeralnathan at gmail.com (Nathan Reynolds) Date: Mon, 4 Dec 2023 07:57:57 -0800 Subject: Some thoughts and an idea about Checked Exceptions In-Reply-To: <525331b8-5a56-4544-b2d3-7127a9b2c9ac@oracle.com> References: <525331b8-5a56-4544-b2d3-7127a9b2c9ac@oracle.com> Message-ID: Brian, Thank you for explaining when to use checked and unchecked exceptions. I've wondered a while about this. When do I use RuntimeException versus Error? Do I use RuntimeException for when the current task needs to exit? Do I use Error when the current program needs to exit? On Sun, Dec 3, 2023 at 7:01?PM Brian Goetz wrote: > > > I actually like Checked Exceptions. 
I think that, when used correctly, > they enable an easy to read style of programming that separates the mess > from the happy path. > > > This is an important point; for all the folks out there who love to thump > the table with "Checked exceptions were a failed experiment", there are > plenty of people who see value in them quietly getting work don. > > I think Checked Exceptions are at their best when only one method of a try > block can throw a specific exception. Meaning, there is no overlap between > the Checked Exceptions of methodA and methodB. This is great because, then, > you can wrap all "Throwable" methods in a single try block, and then each > catch has a 1-to-1 mapping with the code that can throw it. > > > I know what you mean, but I think there are several moving parts here. > There is the checked-vs-unchecked dimension (which is a declaration-site > property), which ideally is about whether the exception has a reasonably > forseeable recovery. (FileNotFoundException is recoverable -- you can > prompt the user for another file name, whereas getting an IOException on > close() is not recoverable -- what are you going to do, close it again?) > So checked exceptions are best when they are signalling something that a > user _wants_ to catch so they can try something else, and unchecked > exceptions are better when there is no forseeable recovery other than log > it, cancel the current unit of work, and then either exit or go back to the > main event loop. > > The point you raise is really more about the `try` statement than checked > exceptions themselves; the main body of a `try` is a block statement. The > block might do IO in a dozen places, but most of the time, we want to treat > them all as "some IO operation in this block failed"; it is rare that we > want to separate failure on a write from failure on a close. Of course, > there are "exceptions" to every rule. 
Catch was also later extended to let > you handle multiple exceptions with the same handler (catch IOE|SQLE), > which further fits into the "aggregation" aspect of try-catch. > > The proposal you make, which basically allows users to associate > invocation context with a region of code that is attached to any exceptions > thrown from that region, is interesting but likely too specialized to be > broadly useful. Attaching metadata to exceptions is a rich and useful vein > of ideas (including attaching context information useful in debugging or > diagnosing test failure), but this one seems at the narrow end of that > vein. But, if you look at catch clauses as pattern matches, which > currently are restricted to the type of the exception, there is much room > to refine the specificity of such patterns. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Mon Dec 4 16:31:52 2023 From: brian.goetz at oracle.com (Brian Goetz) Date: Mon, 4 Dec 2023 11:31:52 -0500 Subject: Some thoughts and an idea about Checked Exceptions In-Reply-To: References: <525331b8-5a56-4544-b2d3-7127a9b2c9ac@oracle.com> Message-ID: <5db98a41-331e-4da1-9ca8-693ff2d5c8a9@oracle.com> A good rule of thumb here is: if you're not the JVM runtime, don't think about extending Error. (But there are plenty of counterexamples to muddy these waters, such as javax.xml.parsers.FactoryConfigurationError.) The rubric you outline is good in theory, except for the fact that it is hard to know at development time whether a condition is really unrecoverable or not (unless you're the JVM runtime.) On 12/4/2023 10:57 AM, Nathan Reynolds wrote: > Brian, > > Thank you for explaining when to use checked and unchecked > exceptions. I've wondered a while about this. When do I use > RuntimeException versus Error? Do I use RuntimeException for when the > current task needs to exit? Do I use Error when the current program > needs to exit?
> > On Sun, Dec 3, 2023 at 7:01?PM Brian Goetz wrote: > > >> >> I actually like Checked Exceptions. I think that, when used >> correctly, they enable an easy to read style of programming that >> separates the mess from the happy path. > > This is an important point; for all the folks out there who love > to thump the table with "Checked exceptions were a failed > experiment", there are plenty of people who see value in them > quietly getting work don. > >> I think Checked Exceptions are at their best when only one method >> of a try block can throw a specific exception. Meaning, there is >> no overlap between the Checked Exceptions of methodA and methodB. >> This is great because, then, you can wrap all "Throwable" methods >> in a single try block, and then each catch has a 1-to-1 mapping >> with the code that can throw it. > > I know what you mean, but I think there are several moving parts > here.? There is the checked-vs-unchecked dimension (which is a > declaration-site property), which ideally is about whether the > exception has a reasonably forseeable recovery.? > (FileNotFoundException is recoverable -- you can prompt the user > for another file name, whereas getting an IOException on close() > is not recoverable -- what are you going to do, close it again?)? > So checked exceptions are best when they are signalling something > that a user _wants_ to catch so they can try something else, and > unchecked exceptions are better when there is no forseeable > recovery other than log it, cancel the current unit of work, and > then either exit or go back to the main event loop. > > The point you raise is really more about the `try` statement than > checked exceptions themselves; the main body of a `try` is a block > statement.? The block might do IO in a dozen places, but most of > the time, we want to treat them all as "some IO operation in this > block failed"; it is rare that we want to separate failure on a > write from failure on a close.? 
Of course, there are "exceptions" > to every rule. Catch was also later extended to let you handle > multiple exceptions with the same handler (catch IOE|SQLE), which > further fits into the "aggregation" aspect of try-catch. > > The proposal you make, which basically allows users to associate > invocation context with a region of code that is attached to any > exceptions thrown from that region, is interesting but likely too > specialized to be broadly useful. Attaching metadata to > exceptions is a rich and useful vein of ideas (including attaching > context information useful in debugging or diagnosing test > failure), but this one seems at the narrow end of that vein. But, > if you look at catch clauses as pattern matches, which currently > are restricted to the type of the exception, there is much room to > refine the specificity of such patterns. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Mon Dec 4 16:40:55 2023 From: brian.goetz at oracle.com (Brian Goetz) Date: Mon, 4 Dec 2023 11:40:55 -0500 Subject: Some thoughts and an idea about Checked Exceptions In-Reply-To: References: <525331b8-5a56-4544-b2d3-7127a9b2c9ac@oracle.com> Message-ID: > > > The proposal you make, which basically allows users to > > associate invocation context with a region of code that > > is attached to any exceptions thrown from that region, is > > interesting but likely too specialized to be broadly > > useful. Attaching metadata to exceptions is a rich and > > useful vein of ideas (including attaching context > > information useful in debugging or diagnosing test > > failure), but this one seems at the narrow end of that > > vein. But, if you look at catch clauses as pattern > > matches, which currently are restricted to the type of > > the exception, there is much room to refine the > > specificity of such patterns. > > WOW - the pattern matching well is bottomless.
Here are some relatively-unformed thoughts on how `catch` clauses could align with patterns.

    catch (IOException e) { ... }

This can be interpreted cleanly as a type pattern already, with the exception thrown as the match candidate. Just as we have alternate constructors for most exceptions for wrapping:

    FooException()
    FooException(String)
    FooException(Throwable)
    FooException(String, Throwable)

we can similarly have matching deconstructors in the exception classes, meaning we could use ordinary nested patterns to detect wrapped exceptions:

    catch (RuntimeException(IOException e)) { ... }

If needed, guards can be introduced into catch clauses as they were in switch cases, with basically the exact same set of rules for dominance/exhaustiveness/flow analysis:

    catch (SqlException e) when e.getErrorCode() == 666 { ... }

Catch clauses currently have an ad-hoc syntax for union types which is not currently supported by type patterns, so there would have to be some reconciliation there. (Usual disclaimer: this stuff is all off in the future, not currently on the plate, deep-diving on the design now is probably counterproductive.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Mon Dec 4 20:41:17 2023 From: davidalayachew at gmail.com (David Alayachew) Date: Mon, 4 Dec 2023 15:41:17 -0500 Subject: Some thoughts and an idea about Checked Exceptions In-Reply-To: References: <525331b8-5a56-4544-b2d3-7127a9b2c9ac@oracle.com> Message-ID: Hello Brian, Thanks for the rundown, I really appreciate it! I'll avoid digging further like you said, but the future looks bright for Java. Thanks again for the peek into the potential future!
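One aside before I go: until deconstruction patterns arrive, the wrapped-exception case can already be approximated, much less cleanly, by catching the broad type and inspecting the cause by hand. The class below is a made-up illustration, not anything from the proposal:

```java
public class UnwrapDemo {
    // Today's approximation of the proposed catch (RuntimeException(IOException e)):
    // catch broadly, then match on the cause with an instanceof pattern.
    static String classify(RuntimeException re) {
        if (re.getCause() instanceof java.io.IOException io) {
            return "wrapped IO failure: " + io.getMessage();
        }
        return "other: " + re.getMessage();
    }

    public static void main(String[] args) {
        try {
            throw new RuntimeException(new java.io.IOException("disk full"));
        } catch (RuntimeException e) {
            System.out.println(classify(e)); // prints "wrapped IO failure: disk full"
        }
    }
}
```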
On Mon, Dec 4, 2023, 11:42 AM Brian Goetz wrote: > > > > The proposal you make, which basically allows users to > > associate invocation context with a region of code that > > is attached to any exceptions thrown from that region, is > > interesting but likely too specialized to be broadly > > useful. Attaching metadata to exceptions is a rich and > > useful vein of ideas (including attaching context > > information useful in debugging or diagnosing test > > failure), but this one seems at the narrow end of that > > vein. But, if you look at catch clauses as pattern > > matches, which currently are restricted to the type of > > the exception, there is much room to refine the > > specificity of such patterns. > > WOW - the pattern matching well is bottom less. > > > Here are some relatively-unformed thoughts on how `catch` clauses could > align with patterns. > > catch (IOException e) { ... } > > This can be interpreted cleanly as a type pattern already, with the > exception thrown as the match candidate. > > Just as we have alternate constructors for most exceptions for wrapping: > > FooException() > FooException(String) > FooException(Throwable) > FooException(String, Throwable) > > we can similarly have matching deconstructors in the exception classes, > meaning we could use ordinary nested patterns to detect wrapped exceptions: > > catch (RuntimeException(IOException e)) { ... } > > If needed, guards can be introduced into catch clauses as they were in > switch cases, with basically the exact same set of rules for > dominance/exhaustiveness/flow analysis: > > catch (SqlException e) when e.getErrorCode() == 666 { ... } > > Catch clauses currently have an ad-hoc syntax for union types which is not > currently supported by type patterns, so there would have to be some > reconciliation there. > > (Usual disclaimer: this stuff is all off in the future, not currently on > the plate, deep-diving on the design now is now probably counterproductive.) 
> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Tue Dec 5 00:05:32 2023 From: davidalayachew at gmail.com (David Alayachew) Date: Mon, 4 Dec 2023 19:05:32 -0500 Subject: Sealed Types vs Exceptions? Message-ID: Hello Amber Dev Team, I learned a lot about Exceptions in the previous discussion above, so I figured I'd ask this question as well -- when does it make sense to handle exceptional cases via a Sealed Type (DivisionResult ====> Success(double answer) || DivideByZero()) vs an Exception (DivideByZeroException)? The only difference I can see is that an Exception gives you debugging details (line number, stack trace) that would be very difficult for a Sealed Type to attain. And if that is the key difference, doesn't that sort of imply that we should opt into the more info-rich source of information wherever possible? After all, debugging is hard enough and more info is to everyone's benefit, right? And most of the extra info is static (line numbers don't change). I'll avoid performance as a reason, as I don't understand the mechanics of what makes one faster or slower. Thank you all for your time and help! David Alayachew -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Tue Dec 5 00:38:29 2023 From: davidalayachew at gmail.com (David Alayachew) Date: Mon, 4 Dec 2023 19:38:29 -0500 Subject: Some thoughts and an idea about Checked Exceptions In-Reply-To: References: <525331b8-5a56-4544-b2d3-7127a9b2c9ac@oracle.com> Message-ID: > > I actually like Checked Exceptions. I think > > that, when used correctly, they enable an > > easy to read style of programming that > > separates the mess from the happy path. > > This is an important point; for all the folks > out there who love to thump the table with > "Checked exceptions were a failed experiment", > there are plenty of people who see value in > them quietly getting work done.
I actually wanted to expand on this point, in hopes that those who dislike Checked Exceptions might appreciate them more. The thing I enjoy the most about Checked Exceptions is the thing I enjoy the most about the ternary operator and Java enums -- they give me totality and exhaustiveness checking. In fact, they were some of Java's earliest forms of totality and exhaustiveness checking -- long before we ever got Sealed Types and Switch Expressions. And Exceptions do all that while giving you context and info (with minimal effort from the dev) that you would struggle to get any other way (line numbers and stack traces). -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Tue Dec 5 01:21:17 2023 From: brian.goetz at oracle.com (Brian Goetz) Date: Mon, 4 Dec 2023 20:21:17 -0500 Subject: Sealed Types vs Exceptions? In-Reply-To: References: Message-ID: Error handling is hard, no matter how you slice it. (See https://joeduffyblog.com/2016/02/07/the-error-model/ for a mature analysis of all the options.) The benefit of using algebraic data types (e.g., the Either or Try monads) for error handling is _uniformity_. Type systems such as the simply typed lambda calculus (Pierce, Ch. 9) say that if we have types T and U, then the function type `T -> U` is also a type. Such function types are well behaved, for example they can be composed: if f : T -> U and g : U -> V, then g.f : T -> V. A Java method

    static int length(String s)

can be viewed as a function String -> int. But what about a Java method

    static int parseInt(String s) throws NumberFormatException

? This is a _partial function_; for some inputs, parseInt() does not produce an output, instead it produces an effect (throwing an exception.) Type systems that describe partial or effectful computations are significantly more complicated and less well-behaved. Modeling a division result as in the following Haskell data type:
    data DivResult = Success Double | Failure

means that arithmetic operations are again total:

    DivResult divide(double a, double b)

This has many benefits, in that it becomes impossible to ignore a failure, and the operation is a total function from double x double to DivResult, rather than a partial function from double x double to double. One can represent the success-or-failure result with a single, first-class value, which means I can pass the result to other code and let it distinguish between success and failure; I don't have to deal with the failure as a side-effect in the frame or stack extent in which it was raised. The common complaints about "lambdas don't work well with exceptions" come from the fact that lambdas want to be functions, but unless their type (modeled in Java with functional interfaces) accounts for the possibility of the exception, we have no way to tunnel the exception from the lambda frame to the invoking frame. It is indeed true that exceptions carry more information, and that information comes at a cost -- both a runtime cost (exceptions are expensive to create and have significant memory cost) and a user-model cost (exceptions are constraining to deal with, and often we just throw up our hands, log them, and move on.) On the other hand, algebraic data types have their own costs -- wrapping result success/failure in a monadic carrier intrudes on API types and on code that consumes results. Error handling is hard, no matter how you slice it. On 12/4/2023 7:05 PM, David Alayachew wrote: > Hello Amber Dev Team, > > I learned a lot about Exceptions in the previous discussion above, so > I figured I'd ask this question as well -- when does it make sense to > handle exceptional cases via a Sealed Type (DivisionResult ====> > Success(double answer) || DivideByZero()) vs an Exception > (DivideByZeroException)?
> > The only difference I can see is that an Exception gives you debugging > details (line number, stack trace) that would be very difficult for a > Sealed Type to attain. And if that is the key difference, doesn't that > sort of imply that we should opt into the more info-rich source of > information wherever possible? After all, debugging is hard enough and > more info is to everyone's benefit, right? > > And most of the extra info is static (line numbers don't change). I'll > avoid performance as a reason, as I don't understand the mechanics of > what makes one faster or slower. > > Thank you all for your time and help! > David Alayachew -------------- next part -------------- An HTML attachment was scrubbed... URL: From pholder at gmail.com Tue Dec 5 09:00:50 2023 From: pholder at gmail.com (P Holder) Date: Tue, 5 Dec 2023 04:00:50 -0500 Subject: Auto indexing improved for() loops Message-ID: I'm participating in Advent of Code 2023 using Java. It reminds me how I frequently wish I could have an index in a modern for loop without having to resort to using the old/traditional for loop. Now that JEP 456 (Unnamed Variables & Patterns) is progressing nicely I can see how it could be used implicitly to grant my wish. Instead of writing:

    final String[] strings = getSomeStrings();
    int index = 0;
    for (final String str : strings)
    {
        // use the str and the index
        index++;
    }

and risking the chance I may forget to place the index++ ... I would rather have something like:

    final String[] strings = getSomeStrings();
    for (int index, final String str : strings)
    {
        // use the str and the index
    }

where the new part "int index" is optional, and the compiler could treat it like "int _" if not specified. Of course it could also be long, one assumes.
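Just to spell out what I mean by resorting to older constructs, the usual workarounds (with a made-up array, since getSomeStrings() is just a placeholder) look like this:

```java
import java.util.stream.IntStream;

public class IndexWorkarounds {
    public static void main(String[] args) {
        final String[] strings = {"alpha", "beta", "gamma"};

        // The old/traditional for loop: the index is right there,
        // but it is noisier than the modern for-each form.
        for (int i = 0; i < strings.length; i++) {
            System.out.println(i + ": " + strings[i]);
        }

        // An IntStream over the indices: also works, also noisy.
        IntStream.range(0, strings.length)
                 .forEach(i -> System.out.println(i + ": " + strings[i]));
    }
}
```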
I do realize it's merely syntactic sugar and I do know how to write the code I need, but it does surprise me the number of times I end up writing old for loops simply because I could use the index, but otherwise know doing it the modern way is cleaner and more expressive, if only I had the index too. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Tue Dec 5 13:52:35 2023 From: brian.goetz at oracle.com (Brian Goetz) Date: Tue, 5 Dec 2023 08:52:35 -0500 Subject: Auto indexing improved for() loops In-Reply-To: References: Message-ID: The issue you raise -- that the for-each loop does not give you access to the index, and that to get it you have to fall all the way back to an old-style iterator loop -- is a valid concern (in fact, I raised this as a comment during JSR 201.) The syntax that you propose looks pretty "nailed on", though; there are contexts in the language where a single variable declaration is used in conjunction with some other syntax construct, including the foreach loop:

    for (VariableDecl : Expression)

and contexts where there are multiple variable declarations separated by commas (such as method declaration), but no context where there are exactly two *and then some weird stuff*. There is no existing model here to appeal to about what

    for (VariableDecl, VariableDecl : Expression)

would mean, which adds cognitive load for users (among other things.) A more grounded approach would be something like:

    interface ListIndex<T> {
        int index();
        T element();
    }

and allow a ListIndex<T> to be used as the induction variable for an iteration:

    for (ListIndex<String> i : strings) {
        ... i.element() ... i.index() ...
    }

There are two "obvious" objections; one is that it is more wordy, and the other is "mumble mumble but performance". But the latter goes away with Valhalla, so let's not speak of it again.
It seems like something we could pursue at some point in the future, but probably after Valhalla. On 12/5/2023 4:00 AM, P Holder wrote: > I'm participating in Advent of Code 2023 using Java. It reminds me > how I frequently wish I could have an index in a modern for loop > without having to resort to using the old/traditional for loop. Now > that the JEP 456 Unnamed Variables & Patterns is progressing nicely I > can see how it could be used implicitly to grant my wish. > > Instead of writing: > > final String[] strings = getSomeStrings(); > int index = 0; > for (final String str : strings) > { > // use the str and the index > index++; > } > > and risking the chance I may forget to place the index++ ... I would > rather have something like: > > final String[] strings = getSomeStrings(); > for (int index, final String str : strings) > { > // use the str and the index > } > > where the new part "int index" is optional, and the compiler could > treat it like "int _" if not specified. Of course it could also be > long, one assumes. > > I do realize it's merely syntactic sugar and I do know how to write > the code I need, but it does surprise me the number of times I end up > writing old for loops simply because I could use the index, but > otherwise know doing it the modern way is cleaner and more expressive, > if only I had the index too. -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Wed Dec 6 02:40:44 2023 From: davidalayachew at gmail.com (David Alayachew) Date: Tue, 5 Dec 2023 21:40:44 -0500 Subject: Sealed Types vs Exceptions? In-Reply-To: References: Message-ID: Hello Brian, Thank you for your response! > Error handling is hard, no matter how you slice > it. (See > https://joeduffyblog.com/2016/02/07/the-error-model/ > for a mature analysis of all the options.) That was a really nice read! And yes, it was definitely comprehensive, no question.
I especially appreciated the focus on how Java does Checked Exceptions and Unchecked Exceptions, as well as the strengths and weaknesses of both. I had never really done much trying to make or catch Unchecked Exceptions, but seeing how they "poison the water" was insightful. It was also really interesting to see them talk about how they used assert. Makes me wonder what Java would look like if we had precondition/postcondition tools like that. Or if assert had meaning in the method signature. > The benefit of using algebraic data types > (e.g., the Either or Try monads) for error > handling is _uniformity_. Type systems such as > the simply typed lambda calculus (Peirce, Ch9) > say that if we have types T and U, then the > function type `T -> U` is also a type. Such > function types are well behaved, for example > they can be composed: > if f : T -> U and g : U -> V, then g.f : T -> V > > A Java method > > static int length(String s) > > can be viewed as a function String -> int. But > what about a Java method > > static int parseInt(String s) throws NumberFormatException > > ? This is a _partial function_; for some > inputs, parseInt() does not produce an output, > instead it produces an effect (throwing an > exception.) Type systems that describe partial > or effectful computations are significantly > more complicated and less well-behaved. > > Modeling a division result as in the following > Haskell data type: > > data DivResult = Success Double | Failure > > means that arithmetic operations are again total: > > DivResult divide(double a, double b) > > This has many benefits, in that it becomes > impossible to ignore a failure, and the > operation is a total function from double x > double to DivResult, rather than a partial > function from double x double to double. 
One > can represent the success-or-failure result > with a single, first-class value, which means I > can pass the result to other code and let it > distinguish between success and failure; I > don't have to deal with the failure as a > side-effect in the frame or stack extent in > which it was raised. The common complaints > about "lambdas don't work well with exceptions" > comes from the fact that lambdas want to be > functions, but unless their type (modeled in > Java with functional interfaces) accounts for > the possibility of the exception, we have no > way to tunnel the exception from the lambda > frame to the invoking frame. So, this was interesting to read. And they said something similar in the article that you linked. On the one hand, that uniformity allows us to cleanly and easily latch/compose things in a happy path sort of way. But maybe I am wrong here, but when we are talking about Sealed Types (Algebraic Data Types), I find that most functions attempt to handle "the good values" of the ADT. Obviously, there are things like Stream or Optional that actually do behave the way that this is described as, but how often are we writing something that rich and involved? Usually, we are doing something more akin to DivisionResult. And in those cases, do we really want to hand over DivisionResult as a parameter as opposed to Success? I guess I just don't see much value in passing the "uncertainty" over to the next level unless that uncertainty is the desirable part (again, Stream and Optional). My reasoning for this is from an article by Alexis King (Lexi Lambda) -- "Parse, don't validate" ( https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/). For me, ADT's are at their best when you can extract the good from the bad, then pass along the safer, more strongly typed good, and then handle the safer, more strongly typed bad as desired. 
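To make that concrete, here is roughly the shape I have in mind, using the DivisionResult from my original question (the helper methods are invented for illustration):

```java
public class Divide {
    // The DivisionResult from my question, written as a sealed type.
    sealed interface DivisionResult permits Success, DivideByZero {}
    record Success(double answer) implements DivisionResult {}
    record DivideByZero() implements DivisionResult {}

    static DivisionResult divide(double a, double b) {
        return b == 0 ? new DivideByZero() : new Success(a / b);
    }

    // "Parse, don't validate": extract the strongly typed good value once,
    // handle the bad case here, and pass along certainty either way.
    static double orElse(DivisionResult result, double fallback) {
        return switch (result) {
            case Success(double answer) -> answer;
            case DivideByZero() -> fallback;
        };
    }

    public static void main(String[] args) {
        System.out.println(orElse(divide(6, 3), -1)); // 2.0
        System.out.println(orElse(divide(6, 0), -1)); // -1.0
    }
}
```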
The idea of continuing to pass along the "Unknown" type seems to me like passing along the uncertainty when I could have certainty - whether good or bad. > It is indeed true that exceptions carry more > information, and that information comes at a > cost -- both a runtime cost (exceptions are > expensive to create and have significant memory > cost) and a user-model cost (exceptions are > constraining to deal with, and often we just > throw up our hands, log them, and move on.) > On the other hand, algebraic data types have > their own costs -- wrapping result > success/failure in a monadic carrier intrudes > on API types and on code that consumes results. Ok, cool. So it's a trade off between performance and information. I guess that leads to the next question then -- when does it make sense to include that level of information? Knowing that this is the primary difference sends me down some interesting and probably incorrect trains of thought. If the difference is purely a contextual one, then unless I want context, I should default to an ADT. And it is only when I want context that I should opt into Exceptions. How do you feel about that? But that gets even more nuanced because then I am missing out on that chaining map-reduce style flow state that ADT's get. Exceptions don't have that. It really feels like I am comparing apples and oranges, and it's not really a matter of which is better, as opposed to if my context happens to want apples vs oranges at that particular moment. I can see use cases for both. For example, if I wanted to implement retry functionality, it becomes clear almost immediately that Exceptions are a much better vehicle for this than ADT's. Yes, you could easily retry if ServiceException or something. But we don't want to retry infinitely. Thus, you need to know which function and in which context we failed. And what if the second time we reattempt, we get further but get a ServiceException on the next statement? Start from 0 or continue on?
What if the user wants the choice? All of this is way easier if you have a stack trace. Whereas for ADT's, the only (easy) way you could do that is if you pass in an instance of your ADT as a parameter of some sort (or a ScopedValue, now that that is an option available to us), and even then, you would end up recreating the concept of a stack trace. Exceptions give you a stack trace, they throw and rewrap with less ceremony, and all while uncluttering your happy path, even more than ADT's. ADT's make everything, success and failure, a first class citizen, which, like you said, forces everyone to hold the exceptional cases front and center, regardless of how important they are. If that hurts your API's ergonomics, well, you opted into that. > Error handling is hard, no matter how you slice > it. I see that much more now. Still, I am making excellent progress in mastering that, especially in these past few days. Thank you for your time and help! David Alayachew -------------- next part -------------- An HTML attachment was scrubbed... URL: From holo3146 at gmail.com Wed Dec 6 08:22:56 2023 From: holo3146 at gmail.com (Holo The Sage Wolf) Date: Wed, 6 Dec 2023 10:22:56 +0200 Subject: Sealed Types vs Exceptions? In-Reply-To: References: Message-ID: David, The discussion about checked vs. unchecked exceptions is quite common. The blog Brian linked is very vocal about their opinion, but it is far from the only opinion. I know people who prefer by a great margin the ease of use of unchecked exceptions, and how you can handle them without the restraint of the language type system (see below for more info). To be clear, I completely agree with the blog's view: I am a strong believer that checked exceptions are better, and that their potential is not fully realized in Java; see https://koka-lang.github.io/koka/doc/index.html for a true full implementation of an effect system (a generalisation of checked exceptions).
> I find that most functions attempt to handle "the good values" of the ADT. Just like how people like uniformity in the type system, people like uniformity in their code. We try to put all failures in as few boxes as possible. When using Java's exceptions it is "exception", "runtimeException" and "error", all of which are under "throwable"; in ADT it is usually "optional" and "either", where either has generic parameters (which replace the inheritance we use in exceptions) (in this case DivResult will be an instance of either). When you have uniformity, you can easily delay handling the failure case to the appropriate time. In addition, "living in the monad space" usually simplifies logic, instead of going back and forth from values to monads: Say I work on float type/double type, and I write a method that tries to solve an equation:

    Either<...> solve(String expr)

(The generic type of the result omitted on purpose) The flow of this method is "validate input" -> "simplify" -> "validate again" (maybe after simplifying there is division by 0, or root-of-negative) -> "try to solve with a strategy" -> "collect results of strategies". Now, failures of "strategy" need to be handled in "solve", but failures in "validate" or "simplify" need to pass to the return value. Now you can implement it as:

    var validation = validate(expr)
    if (validation is Either.right(_)) return validation
    var simplifyExpr = simplify(expr)
    if ...

Or:

    return validate(expr)
        .mapLeft(simplify)
        .mapLeft(...)

Or:

    var validate = validate(expr)
    var simplified = simplify(expr)
    var ....

While in the last solution, all of the methods look like:

    switch(input)
        Either.right(_) -> input
        Either.left(var val) -> ...
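Spelled out as compilable Java, the second option might look like the sketch below; the Either, validate, and simplify here are minimal hand-rolled stand-ins (with left as the success track, matching the snippets above), not a real library:

```java
import java.util.function.Function;

public class EitherDemo {
    // Minimal Either: left holds the success value, right holds the failure.
    sealed interface Either<L, R> permits Left, Right {
        default <T> Either<T, R> mapLeft(Function<? super L, Either<T, R>> f) {
            return switch (this) {
                case Left<L, R> left -> f.apply(left.value());
                case Right<L, R> right -> new Right<>(right.error());
            };
        }
    }
    record Left<L, R>(L value) implements Either<L, R> {}
    record Right<L, R>(R error) implements Either<L, R> {}

    // Placeholder stages; a real solver would do actual work here.
    static Either<String, String> validate(String expr) {
        return expr.isBlank() ? new Right<>("empty expression") : new Left<>(expr);
    }
    static Either<String, String> simplify(String expr) {
        return new Left<>(expr.replace(" ", ""));
    }

    // The "second option": stay inside the monad and chain the stages.
    static Either<String, String> solve(String expr) {
        return validate(expr).mapLeft(EitherDemo::simplify);
    }

    public static void main(String[] args) {
        System.out.println(solve("1 + 2"));
        System.out.println(solve(""));
    }
}
```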
I think that the second and third options are obviously much clearer (especially if you consider "if" to be an anti-pattern). The advantage of the last option is that it solves the common problem with pipe-like code of having dependency on older variables (as the absolute majority of programming languages are not affine). Now, if your language supports monads, then just as you can lift a value into Either, you can lift a function U->T and U->Either[T,_] into Either[U,_]->Either[T,_], so when you have a list of methods it is a lot more natural to just work with the same monad all the way (but of course, if you know how to handle *all* the failure options gracefully, you should lift it down before returning, and if the caller needs to, they will lift your method). > I am missing out on that chaining map-reduce style flow state that ADT's get If you take a look at Koka, which I linked above, you can see that it is actually possible to have chaining with an effect-based system. Java just doesn't have sum and difference types at the generic level. Which is a big reason why some people love unchecked exceptions. If semantically you know about all of the exceptions, you can do stuff like: Assume func is a method whose possible exceptions are unchecked exceptions A, B, C:

    Stream.generate(...)
        .handle(A.class, (A a) -> handleA(...))
        .handle(B.class, (B b) -> handleB(...))
        .forEach(func)

Then you know that the above chain's type is:

    _->() throws A+B+C-A-B+rA+rB

(Where handleX is of type X->() throws rX) Java's type system doesn't have this ability, so you either have all of your lambdas throw Throwable, or use unchecked exceptions. > comparing apples and oranges These are two possible designs for languages/systems, so it is important to compare them. On Wed, 6 Dec 2023, 07:36 David Alayachew, wrote: > Hello Brian, > > Thank you for your response! > > > Error handling is hard, no matter how you slice > > it.
(See > > https://joeduffyblog.com/2016/02/07/the-error-model/ > > for a mature analysis of all the options.) > > That was a really nice read! And yes, it was definitely comprehensive, no > question. > > I especially appreciated the focus on how Java does Checked Exceptions and > Unchecked Exceptions, as well as the strengths and weaknesses of both. I > had never really done much trying to make or catch Unchecked Exceptions, > but seeing how they "poison the water" was insightful. It was also really > interesting to see them talk about how they used assert. Makes me wonder > what Java would look like if we had precondition/postcondition tools like > that. Or if assert had meaning in the method signature. > > > The benefit of using algebraic data types > > (e.g., the Either or Try monads) for error > > handling is _uniformity_. Type systems such as > > the simply typed lambda calculus (Peirce, Ch9) > > say that if we have types T and U, then the > > function type `T -> U` is also a type. Such > > function types are well behaved, for example > > they can be composed: > > if f : T -> U and g : U -> V, then g.f : T -> V > > > > A Java method > > > > static int length(String s) > > > > can be viewed as a function String -> int. But > > what about a Java method > > > > static int parseInt(String s) throws NumberFormatException > > > > ? This is a _partial function_; for some > > inputs, parseInt() does not produce an output, > > instead it produces an effect (throwing an > > exception.) Type systems that describe partial > > or effectful computations are significantly > > more complicated and less well-behaved. 
> > > > Modeling a division result as in the following > > Haskell data type: > > > > data DivResult = Success Double | Failure > > > > means that arithmetic operations are again total: > > > > DivResult divide(double a, double b) > > > > This has many benefits, in that it becomes > > impossible to ignore a failure, and the > > operation is a total function from double x > > double to DivResult, rather than a partial > > function from double x double to double. One > > can represent the success-or-failure result > > with a single, first-class value, which means I > > can pass the result to other code and let it > > distinguish between success and failure; I > > don't have to deal with the failure as a > > side-effect in the frame or stack extent in > > which it was raised. The common complaints > > about "lambdas don't work well with exceptions" > > comes from the fact that lambdas want to be > > functions, but unless their type (modeled in > > Java with functional interfaces) accounts for > > the possibility of the exception, we have no > > way to tunnel the exception from the lambda > > frame to the invoking frame. > > So, this was interesting to read. And they said something similar in the > article that you linked. > > On the one hand, that uniformity allows us to cleanly and easily > latch/compose things in a happy path sort of way. > > But maybe I am wrong here, but when we are talking about Sealed Types > (Algebraic Data Types), I find that most functions attempt to handle "the > good values" of the ADT. Obviously, there are things like Stream or > Optional that actually do behave the way that this is described as, but how > often are we writing something that rich and involved? Usually, we are > doing something more akin to DivisionResult. And in those cases, do we > really want to hand over DivisionResult as a parameter as opposed to > Success? 
I guess I just don't see much value in passing the "uncertainty" > over to the next level unless that uncertainty is the desirable part > (again, Stream and Optional). > > My reasoning for this is from an article by Alexis King (Lexi Lambda) -- > "Parse, don't validate" ( > https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/). For > me, ADT's are at their best when you can extract the good from the bad, > then pass along the safer, more strongly typed good, and then handle the > safer, more strongly typed bad as desired. The idea of continuing to pass > along the "Unknown" type seems to me like passing along the uncertainty > when I could have certainty - whether good or bad. > > > It is indeed true that exceptions carry more > > information, and that information comes at a > > cost -- both a runtime cost (exceptions are > > expensive to create and have significant memory > > cost) and a user-model cost (exceptions are > > constraining to deal with, and often we just > > throw up our hands, log them, and move on.) > > On the other hand, algebraic data types have > > their own costs -- wrapping result > > success/failure in a monadic carrier intrudes > > on API types and on code that consumes results. > > Ok, cool. So it's a trade off between performance and information. > > I guess that leads to the next question then -- when does it make sense to > include that level of information? Knowing that this is the primary > difference sends me down some interesting and probably incorrect trains of > thought. If the difference is purely a contextual one, then unless I want > context, I should default to an ADT. And it is only when I want context > that I should opt into Exceptions. How do you feel about that? > > But that gets even more nuanced because then I am missing out on that > chaining map-reduce style flow state that ADT's get. Exceptions don't have > that. 
It really feels like I am comparing apples and oranges, and it's not > really a matter of which is better, as opposed to if my context happens to > want apples vs oranges at that particular moment. > > I can see use cases for both. For example, if I wanted to implement retry > functionality, it becomes clear almost immediately that Exceptions are a > much better vehicle for this then ADT's. Yes, you could easily retry if > ServiceException or something. But we don't want to retry infinitely. Thus, > you need to know which function and in which context we failed. And what if > the second time we reattempt, we get further but get a ServiceException on > the next statement? Start from 0 or continue on? What if the user wants the > choice? All of this is way easier if you have a stack trace. Whereas for > ADT's, the only (easy) way you could do that is if you pass in an instance > of your ADT as a parameter of some sort (or a ScopedValue, now that that is > an option available to us), and even then, you would end up recreating the > concept of a stack trace. Exceptions give you a stack trace, they throw and > rewrap with less ceremony, and all while uncluttering your happy path, even > more than ADT's. ADT's make everything, success and failure, a first class > citizen, which, like you said, forces everyone to hold the exceptional > cases front and center, regardless of how important they are. If that hurts > your API's ergonomics, well, you opted into that. > > > Error handling is hard, no matter how you slice > > it. > > I see that much more now. Still, I am making excellent progress in > mastering that, especially in these past few days. > > Thank you for your time and help! > David Alayachew > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johannes.spangenberg at hotmail.de Wed Dec 6 23:46:14 2023 From: johannes.spangenberg at hotmail.de (Johannes Spangenberg) Date: Thu, 7 Dec 2023 00:46:14 +0100 Subject: Auto indexing improved for() loops In-Reply-To: References: Message-ID:

> A more grounded approach would be something like:
>
>     interface ListIndex<T> {
>         int index();
>         T element();
>     }
>
> and allow a ListIndex to be used as the induction variable for an iteration:
>
>     for (ListIndex<String> i : strings) {
>         ... i.element() ... i.index()
>     }

Note that you can already achieve something similar with a utility method.

record ListIndex<T>(int index, T element) {}

static <T> Iterable<ListIndex<T>> enumerate(T[] array) {
    return enumerate(Arrays.asList(array));
}

static <T> Iterable<ListIndex<T>> enumerate(Iterable<T> iterable) {
    return () -> new Iterator<>() {
        private final Iterator<T> iterator = iterable.iterator();
        private int nextIndex;

        @Override
        public boolean hasNext() {
            return iterator.hasNext();
        }

        @Override
        public ListIndex<T> next() {
            return new ListIndex<>(nextIndex++, iterator.next());
        }
    };
}

The name of the method is based on enumerate(...) in Python. Here is how you may use the method:

String[] strings = getSomeStrings();
for (ListIndex<String> item : enumerate(strings)) {
    System.out.println(item.index() + ": " + item.element());
}

Unfortunately, Record Patterns in enhanced for loops have been removed by JEP 440 in Java 21. With preview features enabled, you were able to write the following in Java 20:

String[] strings = getSomeStrings();
for (ListIndex(int index, String element) : enumerate(strings)) {
    System.out.println(index + ": " + element);
}

Let's hope something similar will be re-introduced in the future.

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From kan.izh at gmail.com Thu Dec 7 21:05:56 2023 From: kan.izh at gmail.com (Anatoly Kupriyanov) Date: Thu, 7 Dec 2023 21:05:56 +0000 Subject: Auto indexing improved for() loops In-Reply-To: References: Message-ID:

> Unfortunately, Record Patterns in enhanced for loops have been removed by JEP 440 in Java 21. With preview features enabled, you were able to write the following in Java 20:
>
> String[] strings = getSomeStrings();
> for (ListIndex(int index, String element) : enumerate(strings)) {
>     System.out.println(index + ": " + element);
> }

To be fair it looks ugly, especially the fact that you need to specify exact types. I would expect use of "var", at least for the element type. Technically it could be implemented easily as an extension of the Streams API, with no changes in the language required.

Stream.of(strings).forEachIndexed((index, element) -> { // BiConsumer
    System.out.println(index + ": " + element);
});

Or even make it a stream modifier to allow proper chaining:

Stream.of(strings)
    .indexed()
    .filter(i -> i.index() % 3 == 0 && !i.element().isEmpty())
    .forEach(i -> {
        System.out.println(i.index() + ": " + i.element());
    });

This thing could also be parallelStream().

--
WBR, Anatoly.

-------------- next part -------------- An HTML attachment was scrubbed... URL: From forax at univ-mlv.fr Sat Dec 9 08:33:37 2023 From: forax at univ-mlv.fr (Remi Forax) Date: Sat, 9 Dec 2023 09:33:37 +0100 (CET) Subject: Auto indexing improved for() loops In-Reply-To: References: Message-ID: <1010183041.77141276.1702110817516.JavaMail.zimbra@univ-eiffel.fr>

> From: "P Holder"
> To: "amber-dev"
> Sent: Tuesday, December 5, 2023 10:00:50 AM
> Subject: Auto indexing improved for() loops

Hello,

> I'm participating in Advent of Code 2023 using Java.

So am I, but no loops, only streams :). [1]

> It reminds me how I frequently wish I could have an index in a modern for loop without having to resort to using the old/traditional for loop.
> Now that the JEP 456 Unnamed Variables & Patterns is progressing nicely I can see how it could be used implicitly to grant my wish. Instead of writing:
>
> final String[] strings = getSomeStrings();
> int index = 0;
> for (final String str : strings)
> {
>     // use the str and the index
>     index++;
> }
>
> and risking the chance I may forget to place the index++ ... I would rather have something like:
>
> final String[] strings = getSomeStrings();
> for (int index, final String str : strings)
> {
>     // use the str and the index
> }
>
> where the new part "int index" is optional, and the compiler could treat it like "int _" if not specified. Of course it could also be long, one assumes. I do realize it's merely syntactic sugar and I do know how to write the code I need, but it does surprise me the number of times I end up writing old for loops simply because I could use the index, but otherwise know doing it the modern way is cleaner and more expressive, if only I had the index too.

One advantage of the current design is that it makes the intent of the developer clear: if it's a simple loop, use the enhanced for loop; if it's more complex, then use the classical C for loop (both IntelliJ and Eclipse know how to go back and forth). By introducing an index in the design, you are muddying the water, because now you can have a loop that performs side effects on the data structure you are looping over. For example, I'm not sure I like the following code:

List<String> strings = ...
for (int index, String s : strings) {
    strings.set(index, ...);
}

Also, looping with an index on a List is kind of dangerous if you do not know the implementation of the List. With the code above, if 'strings' is a LinkedList, the worst-case complexity is O(n²). Again, not something we want people to write.
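As an aside on the LinkedList point: when an index is needed over an arbitrary List today, ListIterator.nextIndex() already gives an indexed traversal that stays O(n) regardless of the List implementation, because no positional get(index) calls are made. A minimal sketch:

```java
import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

public class IndexedTraversal {
    public static void main(String[] args) {
        List<String> strings = new LinkedList<>(List.of("a", "b", "c"));
        for (ListIterator<String> it = strings.listIterator(); it.hasNext(); ) {
            int index = it.nextIndex(); // index of the element next() is about to return
            String element = it.next();
            System.out.println(index + ": " + element);
        }
    }
}
```

Each element is still reached through the iterator's cursor, so even on a LinkedList the whole loop is a single O(n) pass.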
I think I would prefer to have an indexed stream rather than an indexed loop; the good news is that it seems like something we can do using the gatherer API [2] and Valhalla (to avoid the cost of creating a pair (index, element) for each element).

regards,
Rémi

[1] https://github.com/forax/advent-of-code-2023
[2] https://openjdk.org/jeps/461

-------------- next part -------------- An HTML attachment was scrubbed... URL: From johannes.spangenberg at hotmail.de Sat Dec 9 17:02:21 2023 From: johannes.spangenberg at hotmail.de (Johannes Spangenberg) Date: Sat, 9 Dec 2023 18:02:21 +0100 Subject: Auto indexing improved for() loops In-Reply-To: <1010183041.77141276.1702110817516.JavaMail.zimbra@univ-eiffel.fr> References: <1010183041.77141276.1702110817516.JavaMail.zimbra@univ-eiffel.fr> Message-ID:

> One advantage of the current design is that it makes the intent of the developer clear

I am also not in favor of the initial proposal, but I share the general concern. I see the pain point. Regarding the initial proposal:

> for(int index, String s : strings) {

I think this solution would be too inflexible. Extending the syntax of the language only for this one very specific scenario does not seem justified to me. I think there should rather be a focus on re-introducing and simplifying pattern matching in enhanced for loops. Let's consider my previous example of what was possible in Java 20:

for (ListIndex(int index, String element) : enumerate(strings)) {

The compiler could be updated to infer the pattern and support the following expression:

for ((int index, String element) : enumerate(strings)) {

Or when using var:

for ((var index, var element) : enumerate(strings)) {

By keeping the method call on the right, we greatly improve the flexibility. Let's, for example, look at the following functions shipped with Python. They could all benefit from this syntax if they were ported to Java.
* enumerate(iterable, start=0)
* zip(*iterables, strict=False)
* pairwise(iterable)
* groupby(iterable, key=None)
* product(*iterables, repeat=1)
* combinations(iterable, r)
* permutations(iterable, r=None)

Besides, I think inferring the pattern is not only useful in loops, but also in switch expressions:

return switch (pair(state, isWaiting)) {
    case (INITIALIZATION, false) -> "Initializing task";
    case (INITIALIZATION, true ) -> "Waiting for an external process before continuing with the initialization";
    case (IN_PROGRESS   , false) -> "Task in progress";
    case (IN_PROGRESS   , true ) -> "Waiting for an external process";
    case (FINISHED      , _    ) -> "Task finished";
    case (CANCELED      , _    ) -> "Task canceled";
};

I have to admit that adding `Pair` after `case` might not be that big of a deal in this case, but note that in some cases, the name of the type might be much longer, significantly increasing the noise.

> I think I would prefer to have an indexed stream rather than an indexed loop

Note that checked exceptions and streams do not work well together, at least not in the current state of Java. For the time being, I would therefore favor the enhanced for loop. (It might be possible to fix the interoperability of checked exceptions and streams with union types or varargs in type parameters, but neither is planned as far as I know.)

> the good news is that it seems something we can do using the gatherer API [2] and Valhalla (to avoid the cost of creating a pair (index, element) for each element).

I was wondering if the JIT would already optimize the overhead away. I ran some benchmarks using JMH on the enumerate(...) method I introduced earlier. As you are the second person mentioning Valhalla out of performance concerns, I thought I'd share my results.

fori (OpenJDK 17)         -> enhanced_for (OpenJDK 17):  +7 %
fori (OpenJDK 21)         -> enhanced_for (OpenJDK 21): -34 %
enhanced_for (OpenJDK 17) -> enhanced_for (OpenJDK 21): -29 %
fori (OpenJDK 21)         -> enhanced_for (OpenJDK 17):  -8 %

With OpenJDK 17, my high-level enumerate(...) method was actually 7 % faster than a low-level old-style for loop. However, in later versions of OpenJDK, the high-level code got much slower. You can find the benchmark implementation at GitHub. The benchmark was run within WSL2 and Ubuntu 20.04 on an i7-3770 from 2012.

# VM version: JDK 17.0.7, OpenJDK 64-Bit Server VM, 17.0.7+7-nixos
Benchmark                       Mode  Cnt       Score      Error  Units
EnhancedForHelper.enhanced_for  thrpt  10  588852.311 ± 3783.862  ops/s
EnhancedForHelper.fori          thrpt  10  551406.193 ± 1172.687  ops/s

# VM version: JDK 21, OpenJDK 64-Bit Server VM, 21+35-nixos
Benchmark                       Mode  Cnt       Score      Error  Units
EnhancedForHelper.enhanced_for  thrpt  10  419723.971 ± 8903.577  ops/s
EnhancedForHelper.fori          thrpt  10  640767.173 ± 2829.187  ops/s

# VM version: JDK 20, OpenJDK 64-Bit Server VM, 20+36-nixos
Benchmark                                             Mode  Cnt       Score       Error  Units
EnhancedForHelper.enhanced_for                        thrpt  10  430022.265 ±  3050.285  ops/s
EnhancedForHelper.enhanced_for_with_pattern_matching  thrpt  10  325179.547 ±  5206.194  ops/s
EnhancedForHelper.fori                                thrpt  10  631755.837 ± 20495.694  ops/s

I also ran the benchmark with Azul Zing for Java 21, which uses LLVM for the JIT optimizations. It was about 51 % faster than the fastest run I have seen with OpenJDK. However, the warm-up time was noticeably longer. There was no big difference between the two loops.

# VM version: JDK 21.0.1, Zing 64-Bit Tiered VM, 21.0.1-zing_23.10.0.0-b3-product-linux-X86_64
# *** WARNING: JMH support for this VM is experimental. Be extra careful with the produced data.
Benchmark                       Mode  Cnt       Score       Error  Units
EnhancedForHelper.enhanced_for  thrpt  10  978782.093 ±  4838.520  ops/s
EnhancedForHelper.fori          thrpt  10  965482.460 ± 17837.251  ops/s

I have also seen some results with GraalVM for Java 21, but I don't have the exact numbers on hand. In general, Native Image was very slow on Windows, but competitive with OpenJDK on Linux.
The GraalVM JDK (no Native Image) was about 40 % faster than OpenJDK 21, and there was no measurable difference between fori and enhanced_for on Linux.

Disclaimer: This is just a micro-benchmark. We don't know how all of this translates to real-world applications. I still find it interesting how different the optimizations are. I am also a bit concerned that OpenJDK 21 got noticeably slower with the high-level code compared to OpenJDK 17. I am eager to find out if we see a noticeable difference in our end-to-end benchmarks when we move forward to OpenJDK 21 at my workplace.

-------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Tue Dec 12 05:11:59 2023 From: davidalayachew at gmail.com (David Alayachew) Date: Tue, 12 Dec 2023 00:11:59 -0500 Subject: Pattern Matching - why "when" instead of "if"? In-Reply-To: References: Message-ID:

Hello,

Questions about Pattern Matching belong on the project that introduced them -- Project Amber. Here is their mailing list. But I have already CC'd them on this email.

As for your question, I myself asked almost the exact same question and got an answer straight from the horse's mouth. Here is a link.

https://mail.openjdk.org/pipermail/amber-dev/2022-November/007603.html

Let me know if you would like any clarifications. Thank you for reaching out!
David Alayachew

On Mon, Dec 11, 2023 at 9:00 PM Lou ? wrote:

> Addendum: Why was the previous "&&" syntax dropped in favor of "when"?
>
> P.S.: I hope this response gets correctly added to the thread:
> https://mail.openjdk.org/pipermail/loom-dev/2023-December/006339.html
> Sorry if it doesn't.

-------------- next part -------------- An HTML attachment was scrubbed... URL: From archie.cobbs at gmail.com Sat Dec 16 17:32:59 2023 From: archie.cobbs at gmail.com (Archie Cobbs) Date: Sat, 16 Dec 2023 11:32:59 -0600 Subject: Frozen objects?
Message-ID:

Caveat: I'm just trying to educate myself on what's been discussed in the past, not actually suggest a new language feature. I'm sure this kind of idea has been discussed before so feel free to point me at some previous thread, etc.

In C we have 'const' which essentially means "the memory allocated to this thing is immutable". The nice thing about 'const' is that it can apply to an individual variable or field in a structure, or it can apply to an entire C structure or C array. In effect it applies to any contiguous memory region that can be named/identified at the language level. On the other hand, it's just a language fiction, i.e., it can always be defeated at runtime by casting (except for static constants).

In Java we have 'final' which (in part) is like 'const' for fields and variables, but unlike C, 'final' can't be applied to larger memory regions like entire objects or entire arrays. In C, 'const' can be applied "dynamically" in the sense that I can cast foo to const foo. Of course, this is only enforced at the language level.

Summary of differences between C 'const' and Java 'final':

- Granularity:
  - C: Any contiguous memory region that has a language name/identification
  - Java: At most 64 bits at a time (*) and arrays are not included
  - Advantage: C
- Enforcement:
  - C: Enforced only by the compiler (mostly)
  - Java: Enforced by the compiler and at runtime
  - Advantage: Java
- Dynamic Application:
  - C: Yes
  - Java: No
  - Advantage: C

(*) With records and value objects we are gradually moving towards the ability for larger things than an individual field to be 'const'. More generally, Java has slowly been glomming on some of the goodness from functional programming, including making it easier to declare and work with immutable data.

This all begs the question: why not take this idea to its logical conclusion? And while we're at it, make the capability fully dynamic, instead of limiting when you can 'freeze' something to construction time?
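As a point of reference for what exists today: the closest Java currently comes to freezing is an unmodifiable copy at the collection level (List.copyOf and friends), which snapshots the contents at copy time but cannot retroactively freeze an existing object or array. A minimal sketch of that status quo:

```java
import java.util.ArrayList;
import java.util.List;

public class UnmodifiableCopyDemo {
    public static void main(String[] args) {
        List<String> mutable = new ArrayList<>(List.of("a", "b"));
        List<String> copy = List.copyOf(mutable); // snapshot; later writes to 'mutable' are not seen

        mutable.add("c");
        System.out.println(copy); // [a, b]

        try {
            copy.add("d"); // unmodifiable: all mutators throw
        } catch (UnsupportedOperationException e) {
            System.out.println("copy is unmodifiable");
        }
    }
}
```

Nothing here freezes anything in place: the original list stays writable and a full copy is made, which is exactly the defensive-copy cost a true freeze operation would avoid.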
In other words, add the ability to "freeze" an object or array. If 'x' is frozen, whatever 'x' directly references becomes no longer mutable.

A rough sketch... Add new Freezable interface:

public interface Freezable {
    boolean isFrozen();
    static boolean freeze(Freezable obj);   // returns false if already frozen
}

Arrays automatically implement Freezable (just like they do Cloneable)

What about the memory model? Ideally it would work as if written like this:

public class Foo implements Freezable {
    private volatile boolean frozen;    // set to true by Freezable.freeze()
    void mutateFooContent(Runnable mutation) {
        if (this.frozen)
            throw new FrozenObjectException();
        else
            mutation.run();
    }
}

But there could be a better trade-off of performance vs. semantics. Other trade-offs...

- (-) All mutations to a Freezable would require a new 'frozen' check (* see below)
- (-) There would have to be a new bit allocated in the object header
- (+) Eliminate zillions of JDK defensive array copies (things like String.toCharArray())
- (+) JIT optimizations for constant-folding, etc.
- (+) GC optimizations
  - (*) Put frozen objects into a read-only region of memory to eliminate mutation checks
  - Optimize scanning of frozen references (since they never change)

I'm curious how other people think this idea would or wouldn't make sense for Java & what's been decided in the past.

Thanks,
-Archie

--
Archie L. Cobbs

-------------- next part -------------- An HTML attachment was scrubbed... URL: From markus at headcrashing.eu Sat Dec 16 17:55:25 2023 From: markus at headcrashing.eu (Markus Karg) Date: Sat, 16 Dec 2023 18:55:25 +0100 Subject: AW: Frozen objects? In-Reply-To: References: Message-ID: <001401da3049$0c0e98c0$242bca40$@eu>

It was just today that I asked Brian for frozen arrays. I would love to get this, as it could help us with performance problems in IO and NIO.

-Markus

Von: amber-dev [mailto:amber-dev-retn at openjdk.org] Im Auftrag von Archie Cobbs Gesendet: Samstag, 16.
Dezember 2023 18:33 An: amber-dev Betreff: Frozen objects?

-------------- next part -------------- An HTML attachment was scrubbed... URL: From forax at univ-mlv.fr Sat Dec 16 19:24:19 2023 From: forax at univ-mlv.fr (Remi Forax) Date: Sat, 16 Dec 2023 20:24:19 +0100 (CET) Subject: Frozen objects?
In-Reply-To: References: Message-ID: <938945939.83758350.1702754659136.JavaMail.zimbra@univ-eiffel.fr>

Hello,

As part of Valhalla, the VM support for what you want to do already exists, because we need something similar to be able to deserialize a value class. jdk.internal.misc.Unsafe provides several methods: makePrivateBuffer(), put*() and finishPrivateBuffer() [1]. Compared to your interface Freezable, the equivalent of the method freeze(), finishPrivateBuffer(), needs to return a value so the VM/JIT can separate the reference to a piece of memory which is mutable from the reference to the same piece of memory which is frozen.

regards,
Rémi

[1] https://github.com/openjdk/valhalla/blob/lworld/src/java.base/share/classes/jdk/internal/misc/Unsafe.java#L299

> From: "Archie Cobbs"
> To: "amber-dev"
> Sent: Saturday, December 16, 2023 6:32:59 PM
> Subject: Frozen objects?

-------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Sat Dec 16 20:38:22 2023 From: brian.goetz at oracle.com (Brian Goetz) Date: Sat, 16 Dec 2023 15:38:22 -0500 Subject: Frozen objects? In-Reply-To: References: Message-ID: <839ed79a-0866-4501-aa97-9768e8254570@oracle.com>

These are subtle issues. Here are some considerations to think about.

Even final fields are mutable at certain points in their lifecycle, such as during construction. There are verifier rules that will let a final field be mutated by the constructor declaring the field (even multiple times in the same constructor invocation, which the language prohibits but the VM allows.) There are also some off-label channels for mutating final fields, such as during deserialization, and reflection also offers the ability to bust finality and access control on some fields through setAccessible, but the set of limitations on that is growing (good).

Ignoring whether "final means final" (grr), classes already offer some ability to provide freezing through the use of final fields. And value objects will take this further, giving the VM permission to freely scalarize and reassemble value objects as needed.
This provides much of the benefit you hope to get from freezing, in that it tells the VM at the aggregate level that the object need not be copied (and also, can be copied freely.)

Where we have a real gap is with arrays; we cannot at present make arrays unmodifiable. This is not irremediable, in the sense that we already have error paths on `aastore` (dynamic type checks and ArrayStoreException.) But what is missing is the programming model, because arrays lack constructors -- there's no body of code in which we can draw the circle of mutability for arrays as we can with objects. We've discussed two ways to do this:

- a primitive for allocating an array and running a function to initialize every element, which is guaranteed to run successfully for each element before the reference is dispensed;
- a "freeze" operation on arrays, which acts like a copy, but if the array is already frozen just returns its own reference.

Both of these have their uses.

-------------- next part -------------- An HTML attachment was scrubbed... URL: From eran at leshem.life Sat Dec 16 22:00:05 2023 From: eran at leshem.life (Eran Leshem) Date: Sun, 17 Dec 2023 00:00:05 +0200 Subject: Are canonical record constructor parameter names always available through reflection? Message-ID: <039101da306b$4514dcc0$cf3e9640$@leshem.life>

Hi,

I know that in general, method parameter names are only available at runtime if you specify the -parameters compiler option. From my testing, it seems like that's not the case with canonical record constructors, but rather that their parameter names are always available through reflection, regardless of -parameters.

I couldn't find any documentation about this. Is it required by the spec, or is it just a side effect of these parameters being derived from record components?

Thanks
Eran

-------------- next part -------------- An HTML attachment was scrubbed... URL: From dan.heidinga at oracle.com Mon Dec 18 15:04:12 2023 From: dan.heidinga at oracle.com (Dan Heidinga) Date: Mon, 18 Dec 2023 15:04:12 +0000 Subject: Frozen objects? In-Reply-To: References: Message-ID:

Let me throw out one other concern: races.
The invariant frozen objects want is that the application and runtime can trust they will never be mutated again. Unfortunately, if the object is published across threads before it is frozen, then that invariant is very difficult and expensive to maintain. If two threads, A & B, both have references to the object and thread A freezes it, B may still be publishing writes to it that A only observes later. To ensure the right JMM happens-before relationship for fields of Freezable objects, both reads and writes would need to be more expensive (volatile semantics?) until a thread could validate the object it was operating on was frozen. Freezing is not just a free set of unexplored optimizations. There're also new costs associated with it across the runtime (field read/write, profiling, etc). --Dan From: amber-dev on behalf of Archie Cobbs Date: Saturday, December 16, 2023 at 12:33 PM To: amber-dev Subject: Frozen objects? Caveat: I'm just trying to educate myself on what's been discussed in the past, not actually suggest a new language feature. I'm sure this kind of idea has been discussed before so feel free to point me at some previous thread, etc. In C we have 'const' which essentially means "the memory allocated to this thing is immutable". The nice thing about 'const' is that it can apply to an individual variable or field in a structure, or it can apply to an entire C structure or C array. In effect it applies to any contiguous memory region that can be named/identified at the language level. On the other hand, it's just a language fiction, i.e., it can always be defeated at runtime by casting (except for static constants). In Java we have 'final' which (in part) is like 'const' for fields and variables, but unlike C 'final' can't be applied to larger memory regions like entire objects or entire arrays. In C, 'const' can be applied "dynamically" in the sense I can cast foo to const foo. Of course, this is only enforced at the language level.
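For contrast with C's compile-time-only 'const', the closest tool Java offers today enforces immutability at runtime rather than in the type system: unmodifiable collection views. A minimal sketch using only JDK collections:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnmodifiableViewDemo {
    public static void main(String[] args) {
        List<String> backing = new ArrayList<>(List.of("a", "b"));
        List<String> view = Collections.unmodifiableList(backing);

        try {
            view.add("c");  // rejected at runtime, not by the compiler
        } catch (UnsupportedOperationException e) {
            System.out.println("mutation rejected");
        }

        backing.add("c");                 // the backing list stays mutable
        System.out.println(view.size());  // the view observes the change
    }
}
```

Note the view only restricts one reference path; the object itself is not frozen, which is exactly the gap the proposal above is probing.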
Summary of differences between C 'const' and Java 'final': ? Granularity: o C: Any contiguous memory region that has a language name/identification o Java: At most 64 bits at a time (*) and arrays are not included o Advantage: C ? Enforcement: o C: Enforced only by the compiler (mostly) o Java: Enforced by the compiler and at runtime o Advantage: Java ? Dynamic Application: o C: Yes o Java: No o Advantage: C (*) With records and value objects we are gradually moving towards the ability for larger things than an individual field to be 'const'. More generally, Java has slowly been glomming on some of the goodness from functional programming, including making it easier to declare and work with immutable data. This all begs the question: why not take this idea to its logical conclusion? And while we're at it, make the capability fully dynamic, instead of limiting when you can 'freeze' something construction time? In other words, add the ability to "freeze" an object or array. If 'x' is frozen, whatever 'x' directly references becomes no longer mutable. A rough sketch... Add new Freezable interface: public interface Freezable { boolean isFrozen(); static boolean freeze(Freezable obj); // returns false if already frozen } Arrays automatically implement Freezable (just like they do Cloneable) What about the memory model? Ideally it would work as if written like this: public class Foo implements Freezable { private volatile frozen; // set to true by Freezable.freeze() void mutateFooContent(Runnable mutation) { if (this.frozen) throw new FrozenObjectException(); else mutation.run(); } } But there could be a better trade-off of performance vs. semantics. Other trade-offs... ? (-) All mutations to a Freezable would require a new 'frozen' check (* see below) ? (-) There would have to be a new bit allocated in the object header ? (+) Eliminate zillions of JDK defensive array copies (things like String.toCharArray()) ? (+) JIT optimizations for constant-folding, etc. ? 
(+) GC optimizations o (*) Put frozen objects into a read-only region of memory to eliminate mutation checks o Optimize scanning of frozen references (since they never change) I'm curious how other people think this idea would or wouldn't make sense for Java & what's been decided in the past. Thanks, -Archie -- Archie L. Cobbs -------------- next part -------------- An HTML attachment was scrubbed... URL: From markus at headcrashing.eu Tue Dec 19 08:03:40 2023 From: markus at headcrashing.eu (Markus Karg) Date: Tue, 19 Dec 2023 09:03:40 +0100 Subject: AW: Frozen objects? In-Reply-To: References: Message-ID: <007f01da3251$fde2a600$f9a7f200$@eu> I wonder why we discuss about freezing *objects* (which needs time) but not simply freezing *references* (like `const` does in C++)? -Markus Von: amber-dev [mailto:amber-dev-retn at openjdk.org] Im Auftrag von Dan Heidinga Gesendet: Montag, 18. Dezember 2023 16:04 An: Archie Cobbs; amber-dev Betreff: Re: Frozen objects? Let me throw out one other concern: races. The invariant frozen objects want is that the application and runtime can trust they will never be mutated again. Unfortunately, if the object is published across threads before it is frozen, then that invariant is very difficult and expensive to maintain. If two threads, A & B, both have references to the object and thread A freezes it, B may still be publishing writes to it that A only observes later. To ensure the right JMM happens-before relationship for fields of Freezable objects, both reads and writes would need to be more expensive (volatile semantics?) until a thread could validate the object it was operating on was frozen. Freezing is not just a free set of unexplored optimizations. There're also new costs associated with it across the runtime (field read/write, profiling, etc). --Dan From: amber-dev on behalf of Archie Cobbs Date: Saturday, December 16, 2023 at 12:33 PM To: amber-dev Subject: Frozen objects? 
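The race described here can be made concrete: a plain check-then-act guard on a volatile flag still leaves a window in which a mutation and a concurrent freeze interleave, and pre-freeze writes are only visible to other threads if the object is safely published. A minimal single-threaded sketch of such a guard (names hypothetical, not a proposed API):

```java
public class FreezeDemo {
    static class Box {
        private volatile boolean frozen;  // volatile: readers see the freeze
        private int value;

        void setValue(int v) {
            if (frozen) throw new IllegalStateException("frozen");
            value = v;  // check-then-act: not atomic, so this write can race
        }               // with a concurrent freeze() from another thread
        void freeze() { frozen = true; }
        int value()   { return value; }
    }

    public static void main(String[] args) {
        Box b = new Box();
        b.setValue(42);
        b.freeze();
        try {
            b.setValue(7);
        } catch (IllegalStateException e) {
            System.out.println("rejected after freeze");
        }
        System.out.println(b.value());
    }
}
```

Making the check-then-act atomic, or making pre-freeze writes reliably visible without full volatile semantics on every field, is precisely the cost Dan is pointing at.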
Caveat: I'm just trying to educate myself on what's been discussed in the past, not actually suggest a new language feature. I'm sure this kind of idea has been discussed before so feel free to point me at some previous thread, etc. In C we have 'const' which essentially means "the memory allocated to this thing is immutable". The nice thing about 'const' is that it can apply to an individual variable or field in a structure, or it can apply to an entire C structure or C array. In effect it applies to any contiguous memory region that can be named/identified at the language level. On the other hand, it's just a language fiction, i.e., it can always be defeated at runtime by casting (except for static constants). In Java we have 'final' which (in part) is like 'const' for fields and variables, but unlike C 'final' can't be applied to larger memory regions like entire objects or entire arrays. In C, 'const' can be applied "dynamically" in the sense I can cast foo to const foo. Of course, this is only enforced at the language level. Summary of differences between C 'const' and Java 'final': . Granularity: o C: Any contiguous memory region that has a language name/identification o Java: At most 64 bits at a time (*) and arrays are not included o Advantage: C . Enforcement: o C: Enforced only by the compiler (mostly) o Java: Enforced by the compiler and at runtime o Advantage: Java . Dynamic Application: o C: Yes o Java: No o Advantage: C (*) With records and value objects we are gradually moving towards the ability for larger things than an individual field to be 'const'. More generally, Java has slowly been glomming on some of the goodness from functional programming, including making it easier to declare and work with immutable data. This all begs the question: why not take this idea to its logical conclusion? And while we're at it, make the capability fully dynamic, instead of limiting when you can 'freeze' something construction time? 
In other words, add the ability to "freeze" an object or array. If 'x' is frozen, whatever 'x' directly references becomes no longer mutable. A rough sketch... Add new Freezable interface: public interface Freezable { boolean isFrozen(); static boolean freeze(Freezable obj); // returns false if already frozen } Arrays automatically implement Freezable (just like they do Cloneable) What about the memory model? Ideally it would work as if written like this: public class Foo implements Freezable { private volatile frozen; // set to true by Freezable.freeze() void mutateFooContent(Runnable mutation) { if (this.frozen) throw new FrozenObjectException(); else mutation.run(); } } But there could be a better trade-off of performance vs. semantics. Other trade-offs... . (-) All mutations to a Freezable would require a new 'frozen' check (* see below) . (-) There would have to be a new bit allocated in the object header . (+) Eliminate zillions of JDK defensive array copies (things like String.toCharArray()) . (+) JIT optimizations for constant-folding, etc. . (+) GC optimizations o (*) Put frozen objects into a read-only region of memory to eliminate mutation checks o Optimize scanning of frozen references (since they never change) I'm curious how other people think this idea would or wouldn't make sense for Java & what's been decided in the past. Thanks, -Archie -- Archie L. Cobbs -------------- next part -------------- An HTML attachment was scrubbed... URL: From holo3146 at gmail.com Tue Dec 19 13:17:57 2023 From: holo3146 at gmail.com (Holo The Sage Wolf) Date: Tue, 19 Dec 2023 15:17:57 +0200 Subject: Frozen objects? In-Reply-To: <007f01da3251$fde2a600$f9a7f200$@eu> References: <007f01da3251$fde2a600$f9a7f200$@eu> Message-ID: How do you freeze a memory region without talking about freezing objects? 
Unless your data is flat (so only value classes, primitives, and arrays, two of which won't benefit from freezing), the only way to have freezing that is enforced at compile time is to talk about objects. On Tue, 19 Dec 2023, 10:04 Markus Karg, wrote: > I wonder why we discuss about freezing *objects* (which needs time) but > not simply freezing *references* (like `const` does in C++)? > > -Markus > > > > > > *Von:* amber-dev [mailto:amber-dev-retn at openjdk.org] *Im Auftrag von *Dan > Heidinga > *Gesendet:* Montag, 18. Dezember 2023 16:04 > *An:* Archie Cobbs; amber-dev > *Betreff:* Re: Frozen objects? > > > > Let me throw out one other concern: races. The invariant frozen objects > want is that the application and runtime can trust they will never be > mutated again. Unfortunately, if the object is published across threads > before it is frozen, then that invariant is very difficult and expensive to > maintain. > > > > If two threads, A & B, both have references to the object and thread A > freezes it, B may still be publishing writes to it that A only observes > later. To ensure the right JMM happens-before relationship for fields of > Freezable objects, both reads and writes would need to be more expensive > (volatile semantics?) until a thread could validate the object it was > operating on was frozen. > > > > Freezing is not just a free set of unexplored optimizations. There're > also new costs associated with it across the runtime (field read/write, > profiling, etc). > > > > --Dan > > > > *From: *amber-dev on behalf of Archie Cobbs < > archie.cobbs at gmail.com> > *Date: *Saturday, December 16, 2023 at 12:33 PM > *To: *amber-dev > *Subject: *Frozen objects? > > Caveat: I'm just trying to educate myself on what's been discussed in the > past, not actually suggest a new language feature. I'm sure this kind of > idea has been discussed before so feel free to point me at some previous > thread, etc.
> > > > In C we have 'const' which essentially means "the memory allocated to this > thing is immutable". The nice thing about 'const' is that it can apply to > an individual variable or field in a structure, or it can apply to an > entire C structure or C array. In effect it applies to any contiguous > memory region that can be named/identified at the language level. > > > > On the other hand, it's just a language fiction, i.e., it can always be > defeated at runtime by casting (except for static constants). > > > > In Java we have 'final' which (in part) is like 'const' for fields and > variables, but unlike C 'final' can't be applied to larger memory regions > like entire objects or entire arrays. > > > > In C, 'const' can be applied "dynamically" in the sense I can cast foo to > const foo. Of course, this is only enforced at the language level. > > > > Summary of differences between C 'const' and Java 'final': > > ? Granularity: > > o C: Any contiguous memory region that has a language > name/identification > > o Java: At most 64 bits at a time (*) and arrays are not included > > o Advantage: C > > ? Enforcement: > > o C: Enforced only by the compiler (mostly) > > o Java: Enforced by the compiler and at runtime > > o Advantage: Java > > ? Dynamic Application: > > o C: Yes > > o Java: No > > o Advantage: C > > (*) With records and value objects we are gradually moving towards the > ability for larger things than an individual field to be 'const'. More > generally, Java has slowly been glomming on some of the goodness from > functional programming, including making it easier to declare and work with > immutable data. > > > > This all begs the question: why not take this idea to its logical > conclusion? And while we're at it, make the capability fully dynamic, > instead of limiting when you can 'freeze' something construction time? > > > > In other words, add the ability to "freeze" an object or array. 
If 'x' is > frozen, whatever 'x' directly references becomes no longer mutable. > > > > A rough sketch... > > > > Add new Freezable interface: > > > > public interface Freezable { > > boolean isFrozen(); > > static boolean freeze(Freezable obj); // returns false if > already frozen > > } > > > > Arrays automatically implement Freezable (just like they do Cloneable) > > > > What about the memory model? Ideally it would work as if written like this: > > > > public class Foo implements Freezable { > > private volatile boolean frozen; // set to true by Freezable.freeze() > > void mutateFooContent(Runnable mutation) { > > if (this.frozen) > > throw new FrozenObjectException(); > > else > > mutation.run(); > > } > > } > > > > But there could be a better trade-off of performance vs. semantics. > > > > Other trade-offs... > > * (-) All mutations to a Freezable would require a new 'frozen' > check (* see below) > > * (-) There would have to be a new bit allocated in the object > header > > * (+) Eliminate zillions of JDK defensive array copies (things > like String.toCharArray()) > > * (+) JIT optimizations for constant-folding, etc. > > * (+) GC optimizations > > o (*) Put frozen objects into a read-only region of memory to > eliminate mutation checks > > o Optimize scanning of frozen references (since they never change) > > I'm curious how other people think this idea would or wouldn't make sense > for Java & what's been decided in the past. > > > > Thanks, > > -Archie > > > > -- > > Archie L. Cobbs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From holo3146 at gmail.com Tue Dec 19 13:31:55 2023 From: holo3146 at gmail.com (Holo The Sage Wolf) Date: Tue, 19 Dec 2023 15:31:55 +0200 Subject: Frozen objects? In-Reply-To: References: Message-ID: I think that indeed the main concern for the run time is races, but even ignoring races, there is (almost*) no real way to make the compiler know when there is no illegal access to the values.
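One pattern that does give mutation a well-defined, enforceable window today is confining it to a lambda handed to a factory, with only an immutable snapshot escaping. A minimal sketch using only existing JDK pieces (names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ScopedMutation {
    // Mutation is only possible inside the configure scope; what escapes
    // is an unmodifiable snapshot, so no later caller can mutate it.
    static List<String> build(Consumer<List<String>> configure) {
        List<String> tmp = new ArrayList<>();
        configure.accept(tmp);
        return List.copyOf(tmp);  // effectively "frozen" from here on
    }

    public static void main(String[] args) {
        List<String> xs = build(l -> { l.add("a"); l.add("b"); });
        System.out.println(xs);
        try {
            xs.add("c");
        } catch (UnsupportedOperationException e) {
            System.out.println("immutable outside the scope");
        }
    }
}
```

The limitation, of course, is that the lambda can leak the mutable list out of the scope; the runtime does not enforce the lifecycle, which is the gap being discussed.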
I do not think that introducing a language-level/runtime mechanism that has consequences for what is valid code, without a real way to enforce it, is good. *Like Brian G. said, final fields have a specific time in their lifecycle where they can be modified, which does allow the compiler to do its job; the only way (I can think of) to put user code under an enforced lifecycle currently is through passing scopes (/continuations/lambdas/however you want to call them) to the said object, and having the object implement a mechanism to run these scopes. I'll advocate for this every time I can: I believe we should start a conversation about designing a way to have better control over the lifecycle of an object. (E.g. an enhanced try-with-resources of sorts.) On Tue, 19 Dec 2023, 01:56 Dan Heidinga, wrote: > Let me throw out one other concern: races. The invariant frozen objects > want is that the application and runtime can trust they will never be > mutated again. Unfortunately, if the object is published across threads > before it is frozen, then that invariant is very difficult and expensive to > maintain. > > > > If two threads, A & B, both have references to the object and thread A > freezes it, B may still be publishing writes to it that A only observes > later. To ensure the right JMM happens-before relationship for fields of > Freezable objects, both reads and writes would need to be more expensive > (volatile semantics?) until a thread could validate the object it was > operating on was frozen. > > > > Freezing is not just a free set of unexplored optimizations. There're > also new costs associated with it across the runtime (field read/write, > profiling, etc). > > > > --Dan > > > > *From: *amber-dev on behalf of Archie Cobbs < > archie.cobbs at gmail.com> > *Date: *Saturday, December 16, 2023 at 12:33 PM > *To: *amber-dev > *Subject: *Frozen objects?
> > Caveat: I'm just trying to educate myself on what's been discussed in the > past, not actually suggest a new language feature. I'm sure this kind of > idea has been discussed before so feel free to point me at some previous > thread, etc. > > > > In C we have 'const' which essentially means "the memory allocated to this > thing is immutable". The nice thing about 'const' is that it can apply to > an individual variable or field in a structure, or it can apply to an > entire C structure or C array. In effect it applies to any contiguous > memory region that can be named/identified at the language level. > > > > On the other hand, it's just a language fiction, i.e., it can always be > defeated at runtime by casting (except for static constants). > > > > In Java we have 'final' which (in part) is like 'const' for fields and > variables, but unlike C 'final' can't be applied to larger memory regions > like entire objects or entire arrays. > > > > In C, 'const' can be applied "dynamically" in the sense I can cast foo to > const foo. Of course, this is only enforced at the language level. > > > > Summary of differences between C 'const' and Java 'final': > > ? Granularity: > > o C: Any contiguous memory region that has a language > name/identification > > o Java: At most 64 bits at a time (*) and arrays are not included > > o Advantage: C > > ? Enforcement: > > o C: Enforced only by the compiler (mostly) > > o Java: Enforced by the compiler and at runtime > > o Advantage: Java > > ? Dynamic Application: > > o C: Yes > > o Java: No > > o Advantage: C > > (*) With records and value objects we are gradually moving towards the > ability for larger things than an individual field to be 'const'. More > generally, Java has slowly been glomming on some of the goodness from > functional programming, including making it easier to declare and work with > immutable data. > > > > This all begs the question: why not take this idea to its logical > conclusion? 
And while we're at it, make the capability fully dynamic, > instead of limiting when you can 'freeze' something construction time? > > > > In other words, add the ability to "freeze" an object or array. If 'x' is > frozen, whatever 'x' directly references becomes no longer mutable. > > > > A rough sketch... > > > > Add new Freezable interface: > > > > public interface Freezable { > > boolean isFrozen(); > > static boolean freeze(Freezable obj); // returns false if > already frozen > > } > > > > Arrays automatically implement Freezable (just like they do Cloneable) > > > > What about the memory model? Ideally it would work as if written like this: > > > > public class Foo implements Freezable { > > private volatile frozen; // set to true by Freezable.freeze() > > void mutateFooContent(Runnable mutation) { > > if (this.frozen) > > throw new FrozenObjectException(); > > else > > mutation.run(); > > } > > } > > > > But there could be a better trade-off of performance vs. semantics. > > > > Other trade-offs... > > ? (-) All mutations to a Freezable would require a new 'frozen' > check (* see below) > > ? (-) There would have to be a new bit allocated in the object > header > > ? (+) Eliminate zillions of JDK defensive array copies (things > like String.toCharArray()) > > ? (+) JIT optimizations for constant-folding, etc. > > ? (+) GC optimizations > > o (*) Put frozen objects into a read-only region of memory to > eliminate mutation checks > > o Optimize scanning of frozen references (since they never change) > > I'm curious how other people think this idea would or wouldn't make sense > for Java & what's been decided in the past. > > > > Thanks, > > -Archie > > > > -- > > Archie L. Cobbs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From markus at headcrashing.eu Wed Dec 20 10:31:33 2023 From: markus at headcrashing.eu (Markus Karg) Date: Wed, 20 Dec 2023 11:31:33 +0100 Subject: AW: Frozen objects? 
In-Reply-To: References: <007f01da3251$fde2a600$f9a7f200$@eu> Message-ID: <023901da332f$c0e19660$42a4c320$@eu> C++ ("const") does not freeze the memory region at all, and it does not need to (and hence is quite fast at runtime as it does not even need to check the access). The compiler simply rejects to compile the attempt to write via read-only references. That would be sufficient for most cases. Freezing objects is a different idea, and only needed in side cases. So I would plea for introducing compile-time read-only references first, as it is the lower hanging fruit. -Markus Von: Holo The Sage Wolf [mailto:holo3146 at gmail.com] Gesendet: Dienstag, 19. Dezember 2023 14:18 An: Markus Karg Cc: Dan Heidinga; Archie Cobbs; amber-dev Betreff: Re: Frozen objects? How do you freeze a memory region without talking about freezing objects? Unless your data is flat (so only value classes, primitives and arrays, 2 of which won't benefit from freezing) the only way to have freezing something that is enforced at compile time you must talk about objects. On Tue, 19 Dec 2023, 10:04 Markus Karg, wrote: I wonder why we discuss about freezing *objects* (which needs time) but not simply freezing *references* (like `const` does in C++)? -Markus Von: amber-dev [mailto:amber-dev-retn at openjdk.org] Im Auftrag von Dan Heidinga Gesendet: Montag, 18. Dezember 2023 16:04 An: Archie Cobbs; amber-dev Betreff: Re: Frozen objects? Let me throw out one other concern: races. The invariant frozen objects want is that the application and runtime can trust they will never be mutated again. Unfortunately, if the object is published across threads before it is frozen, then that invariant is very difficult and expensive to maintain. If two threads, A & B, both have references to the object and thread A freezes it, B may still be publishing writes to it that A only observes later. 
To ensure the right JMM happens-before relationship for fields of Freezable objects, both reads and writes would need to be more expensive (volatile semantics?) until a thread could validate the object it was operating on was frozen. Freezing is not just a free set of unexplored optimizations. There?re also new costs associated with it across the runtime (field read/write, profiling, etc). --Dan From: amber-dev on behalf of Archie Cobbs Date: Saturday, December 16, 2023 at 12:33 PM To: amber-dev Subject: Frozen objects? Caveat: I'm just trying to educate myself on what's been discussed in the past, not actually suggest a new language feature. I'm sure this kind of idea has been discussed before so feel free to point me at some previous thread, etc. In C we have 'const' which essentially means "the memory allocated to this thing is immutable". The nice thing about 'const' is that it can apply to an individual variable or field in a structure, or it can apply to an entire C structure or C array. In effect it applies to any contiguous memory region that can be named/identified at the language level. On the other hand, it's just a language fiction, i.e., it can always be defeated at runtime by casting (except for static constants). In Java we have 'final' which (in part) is like 'const' for fields and variables, but unlike C 'final' can't be applied to larger memory regions like entire objects or entire arrays. In C, 'const' can be applied "dynamically" in the sense I can cast foo to const foo. Of course, this is only enforced at the language level. Summary of differences between C 'const' and Java 'final': ? Granularity: o C: Any contiguous memory region that has a language name/identification o Java: At most 64 bits at a time (*) and arrays are not included o Advantage: C ? Enforcement: o C: Enforced only by the compiler (mostly) o Java: Enforced by the compiler and at runtime o Advantage: Java ? 
Dynamic Application: o C: Yes o Java: No o Advantage: C (*) With records and value objects we are gradually moving towards the ability for larger things than an individual field to be 'const'. More generally, Java has slowly been glomming on some of the goodness from functional programming, including making it easier to declare and work with immutable data. This all begs the question: why not take this idea to its logical conclusion? And while we're at it, make the capability fully dynamic, instead of limiting when you can 'freeze' something construction time? In other words, add the ability to "freeze" an object or array. If 'x' is frozen, whatever 'x' directly references becomes no longer mutable. A rough sketch... Add new Freezable interface: public interface Freezable { boolean isFrozen(); static boolean freeze(Freezable obj); // returns false if already frozen } Arrays automatically implement Freezable (just like they do Cloneable) What about the memory model? Ideally it would work as if written like this: public class Foo implements Freezable { private volatile frozen; // set to true by Freezable.freeze() void mutateFooContent(Runnable mutation) { if (this.frozen) throw new FrozenObjectException(); else mutation.run(); } } But there could be a better trade-off of performance vs. semantics. Other trade-offs... ? (-) All mutations to a Freezable would require a new 'frozen' check (* see below) ? (-) There would have to be a new bit allocated in the object header ? (+) Eliminate zillions of JDK defensive array copies (things like String.toCharArray()) ? (+) JIT optimizations for constant-folding, etc. ? (+) GC optimizations o (*) Put frozen objects into a read-only region of memory to eliminate mutation checks o Optimize scanning of frozen references (since they never change) I'm curious how other people think this idea would or wouldn't make sense for Java & what's been decided in the past. Thanks, -Archie -- Archie L. 
Cobbs -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan.heidinga at oracle.com Wed Dec 20 16:58:01 2023 From: dan.heidinga at oracle.com (Dan Heidinga) Date: Wed, 20 Dec 2023 16:58:01 +0000 Subject: [External] : AW: Frozen objects? In-Reply-To: <023901da332f$c0e19660$42a4c320$@eu> References: <007f01da3251$fde2a600$f9a7f200$@eu> <023901da332f$c0e19660$42a4c320$@eu> Message-ID: C++ "const" is a bit of a mess. It's not only a local property that prevents writes to the reference; it's also a viral property that infects the type system. Instead of dealing with a single type ("X"), we now have two ("X", "const X") with a 1-way conversion from "X -> const X" but no conversion back (Let's not talk about const_cast's undefined behaviour...). Now methods need to be defined to take either an X or a const X parameter and need to flow the const through to all their callers. But that's not all: now we need to be able to mark virtual methods to declare if the receiver is const or not. And to mark return types as const or not. There's a pretty massive cost to the user's mental model and to the language as well as producing on-going compatibility problems (is adding or removing "const" modifiers binary compatible? Source compatible?) for library evolution. Syntactic sugar to indicate "I won't write to this" doesn't really pay its way. The costs are quite high. From: Markus Karg Date: Wednesday, December 20, 2023 at 5:32 AM To: 'Holo The Sage Wolf' Cc: Dan Heidinga , 'Archie Cobbs' , 'amber-dev' Subject: [External] : AW: Frozen objects? C++ ("const") does not freeze the memory region at all, and it does not need to (and hence is quite fast at runtime, as it does not even need to check the access). The compiler simply refuses to compile an attempt to write via a read-only reference. That would be sufficient for most cases. Freezing objects is a different idea, and only needed in side cases.
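Today's Java can already approximate a compile-time read-only reference by splitting a type into a read-only interface and a mutable implementation; the callee then simply has no mutators in scope. A minimal sketch (all names hypothetical):

```java
public class ReadOnlyRefDemo {
    interface Point { int x(); int y(); }  // read-only view of the type

    static final class MutablePoint implements Point {
        private int x, y;
        public int x() { return x; }
        public int y() { return y; }
        void moveTo(int nx, int ny) { x = nx; y = ny; }  // mutator, not on Point
    }

    // The callee sees only the read-only type; p.moveTo(0, 0) would not compile.
    static int sum(Point p) {
        return p.x() + p.y();
    }

    public static void main(String[] args) {
        MutablePoint p = new MutablePoint();
        p.moveTo(3, 4);
        System.out.println(sum(p));
    }
}
```

This is purely conventional (a downcast defeats it, much like const_cast), but it shows the compile-time-rejection part without any new language feature.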
So I would plea for introducing compile-time read-only references first, as it is the lower hanging fruit. -Markus Von: Holo The Sage Wolf [mailto:holo3146 at gmail.com] Gesendet: Dienstag, 19. Dezember 2023 14:18 An: Markus Karg Cc: Dan Heidinga; Archie Cobbs; amber-dev Betreff: Re: Frozen objects? How do you freeze a memory region without talking about freezing objects? Unless your data is flat (so only value classes, primitives and arrays, 2 of which won't benefit from freezing) the only way to have freezing something that is enforced at compile time you must talk about objects. On Tue, 19 Dec 2023, 10:04 Markus Karg, > wrote: I wonder why we discuss about freezing *objects* (which needs time) but not simply freezing *references* (like `const` does in C++)? -Markus Von: amber-dev [mailto:amber-dev-retn at openjdk.org] Im Auftrag von Dan Heidinga Gesendet: Montag, 18. Dezember 2023 16:04 An: Archie Cobbs; amber-dev Betreff: Re: Frozen objects? Let me throw out one other concern: races. The invariant frozen objects want is that the application and runtime can trust they will never be mutated again. Unfortunately, if the object is published across threads before it is frozen, then that invariant is very difficult and expensive to maintain. If two threads, A & B, both have references to the object and thread A freezes it, B may still be publishing writes to it that A only observes later. To ensure the right JMM happens-before relationship for fields of Freezable objects, both reads and writes would need to be more expensive (volatile semantics?) until a thread could validate the object it was operating on was frozen. Freezing is not just a free set of unexplored optimizations. There?re also new costs associated with it across the runtime (field read/write, profiling, etc). --Dan From: amber-dev > on behalf of Archie Cobbs > Date: Saturday, December 16, 2023 at 12:33 PM To: amber-dev > Subject: Frozen objects? 
Caveat: I'm just trying to educate myself on what's been discussed in the past, not actually suggest a new language feature. I'm sure this kind of idea has been discussed before so feel free to point me at some previous thread, etc. In C we have 'const' which essentially means "the memory allocated to this thing is immutable". The nice thing about 'const' is that it can apply to an individual variable or field in a structure, or it can apply to an entire C structure or C array. In effect it applies to any contiguous memory region that can be named/identified at the language level. On the other hand, it's just a language fiction, i.e., it can always be defeated at runtime by casting (except for static constants). In Java we have 'final' which (in part) is like 'const' for fields and variables, but unlike C 'final' can't be applied to larger memory regions like entire objects or entire arrays. In C, 'const' can be applied "dynamically" in the sense I can cast foo to const foo. Of course, this is only enforced at the language level. Summary of differences between C 'const' and Java 'final': ? Granularity: o C: Any contiguous memory region that has a language name/identification o Java: At most 64 bits at a time (*) and arrays are not included o Advantage: C ? Enforcement: o C: Enforced only by the compiler (mostly) o Java: Enforced by the compiler and at runtime o Advantage: Java ? Dynamic Application: o C: Yes o Java: No o Advantage: C (*) With records and value objects we are gradually moving towards the ability for larger things than an individual field to be 'const'. More generally, Java has slowly been glomming on some of the goodness from functional programming, including making it easier to declare and work with immutable data. This all begs the question: why not take this idea to its logical conclusion? And while we're at it, make the capability fully dynamic, instead of limiting when you can 'freeze' something construction time? 
In other words, add the ability to "freeze" an object or array. If 'x' is frozen, whatever 'x' directly references becomes no longer mutable. A rough sketch...

Add new Freezable interface:

public interface Freezable {
    boolean isFrozen();
    static boolean freeze(Freezable obj);   // returns false if already frozen
}

Arrays automatically implement Freezable (just like they do Cloneable).

What about the memory model? Ideally it would work as if written like this:

public class Foo implements Freezable {
    private volatile boolean frozen;   // set to true by Freezable.freeze()

    void mutateFooContent(Runnable mutation) {
        if (this.frozen)
            throw new FrozenObjectException();
        else
            mutation.run();
    }
}

But there could be a better trade-off of performance vs. semantics. Other trade-offs...
- (-) All mutations to a Freezable would require a new 'frozen' check (* see below)
- (-) There would have to be a new bit allocated in the object header
- (+) Eliminate zillions of JDK defensive array copies (things like String.toCharArray())
- (+) JIT optimizations for constant-folding, etc.
- (+) GC optimizations
  - (*) Put frozen objects into a read-only region of memory to eliminate mutation checks
  - Optimize scanning of frozen references (since they never change)

I'm curious how other people think this idea would or wouldn't make sense for Java & what's been decided in the past.

Thanks,
-Archie

--
Archie L. Cobbs

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From redio.development at gmail.com Thu Dec 21 15:05:33 2023
From: redio.development at gmail.com (Red IO)
Date: Thu, 21 Dec 2023 16:05:33 +0100
Subject: [External] : AW: Frozen objects?
In-Reply-To: References: <007f01da3251$fde2a600$f9a7f200$@eu> <023901da332f$c0e19660$42a4c320$@eu>
Message-ID:

I think const is a pretty fine concept; the confusion in C++ primarily comes from its confusing syntax and from const having multiple meanings, like const member functions. A conversion from const to non-const just makes no sense.
You can use every non-const object like a const one, but never the other way around. I prefer the inverted Rust equivalent "mut", as it makes the point clearer: if you share a mutable reference you expect the recipient to mutate your data; if you pass an immutable reference you can be assured the recipient won't change your data. It's just a contract about whether or not some value can be mutated and whether or not a method needs to mutate its parameters. In Java we are currently stuck with exception-throwing views, documentation, and defensive copies. I'm not sure whether adding a mutability system after the fact is possible or a good idea. Especially old libraries would require some sort of const cast to be usable, which would undermine the certainty such a system provides.

Best regards
RedIODev

On Wed, Dec 20, 2023, 17:58 Dan Heidinga wrote:

> C++ "const" is a bit of a mess. It's not only a local property that prevents writes to the reference; it's also a viral property that infects the type system. Instead of dealing with a single type ("X"), we now have two ("X", "const X") with a one-way conversion from "X -> const X" but no conversion back (let's not talk about const_cast's undefined behaviour...). Now methods need to be defined to take either an X or a const X parameter and need to flow the const through to all their callers.
>
> But that's not all: now we need to be able to mark virtual methods to declare if the receiver is const or not. And to mark return types as const or not.
>
> There's a pretty massive cost to the user's mental model and to the language, as well as producing on-going compatibility problems (is adding or removing "const" modifiers binary compatible? Source compatible?) for library evolution.
>
> Syntactic sugar to indicate "I won't write to this" doesn't really pay its way. The costs are quite high.
> *From:* Markus Karg
> *Date:* Wednesday, December 20, 2023 at 5:32 AM
> *To:* 'Holo The Sage Wolf'
> *Cc:* Dan Heidinga, 'Archie Cobbs', 'amber-dev'
> *Subject:* [External] : AW: Frozen objects?
>
> C++ ("const") does not freeze the memory region at all, and it does not need to (and hence is quite fast at runtime, as it does not even need to check the access). The compiler simply rejects compiling any attempt to write via a read-only reference. That would be sufficient for most cases. Freezing objects is a different idea, and only needed in side cases. So I would plead for introducing compile-time read-only references first, as it is the lower-hanging fruit.
>
> -Markus
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From markus at headcrashing.eu Thu Dec 21 15:51:51 2023
From: markus at headcrashing.eu (Markus Karg)
Date: Thu, 21 Dec 2023 16:51:51 +0100
Subject: AW: [External] : AW: Frozen objects?
In-Reply-To: References: <007f01da3251$fde2a600$f9a7f200$@eu> <023901da332f$c0e19660$42a4c320$@eu>
Message-ID: <004201da3425$9d43c6f0$d7cb54d0$@eu>

You are right, backwards compatibility without introducing const_cast as in C++ is a problem. But that neither means that we MUST introduce const_cast nor that the problem is not solvable.

-Markus
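Dan's race concern earlier in the thread can be made concrete. The sketch below models Archie's hypothetical freeze() with a single volatile flag; none of these types exist in the JDK, and the names are made up for illustration:

```java
// Hypothetical sketch, not a JDK API: a freeze bit guarding mutation.
final class FreezableBox {
    private int value;               // plain field: writes carry no ordering
    private volatile boolean frozen; // the freeze bit

    void set(int v) {
        if (frozen) throw new IllegalStateException("frozen");
        // Race window: another thread may call freeze() between the check
        // above and the plain write below, so a mutation can still become
        // visible after isFrozen() has returned true.
        value = v;
    }

    void freeze()      { frozen = true; }
    boolean isFrozen() { return frozen; }
    int get()          { return value; }
}
```

Single-threaded, the check behaves as expected; across threads, closing that window would need volatile (or stronger) semantics on every field access, which is exactly the cost Dan describes.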
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From dan.heidinga at oracle.com Thu Dec 21 16:08:59 2023
From: dan.heidinga at oracle.com (Dan Heidinga)
Date: Thu, 21 Dec 2023 16:08:59 +0000
Subject: [External] : AW: Frozen objects?
In-Reply-To: <004201da3425$9d43c6f0$d7cb54d0$@eu>
References: <007f01da3251$fde2a600$f9a7f200$@eu> <023901da332f$c0e19660$42a4c320$@eu> <004201da3425$9d43c6f0$d7cb54d0$@eu>
Message-ID:

Not unsolvable, but the payoff isn't there. And none of this is the hard work of figuring out what "const" would mean in the language, how it would fit with return types, method parameters, receivers, conversions, annotations, reflection, method handles, debuggers, and any number of other existing features in Java. If "const" is something you really want to see added to Java, then spend the time to work through the semantics and bring a proposal with enough details worked out that it can be discussed and debated on its merit.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From markus at headcrashing.eu Thu Dec 21 16:23:01 2023
From: markus at headcrashing.eu (Markus Karg)
Date: Thu, 21 Dec 2023 17:23:01 +0100
Subject: AW: [External] : AW: Frozen objects?
In-Reply-To: References: <007f01da3251$fde2a600$f9a7f200$@eu> <023901da332f$c0e19660$42a4c320$@eu> <004201da3425$9d43c6f0$d7cb54d0$@eu>
Message-ID: <006701da3429$f7cea730$e76bf590$@eu>

So you think frozen objects will be any simpler?

-Markus
If you share a mutable reference you expect the recipient to mutate your data, if you pass an immutable reference you can be ensured the recipient won't change your data. It's just a contract rather or not some value can be mutated and rather or not a method requires to mutate it's parameters. In java we currently are stuck with exception throwing views, documentation and defensive copies. I'm not sure rather adding an internal mutability system afterwards is possible or a good idea. Especially old libraries would require some sort of const cast to be usable. Which would undermine the certainty such a system provides. Best regards RedIODev On Wed, Dec 20, 2023, 17:58 Dan Heidinga wrote: C++ "const" is a bit of a mess. It's not only a local property that prevents writes to the reference; it's also a viral property that infects the type system. Instead of dealing with a single type ("X"), we now have two ("X", "const X") with a 1-way conversion from "X -> const X" but no conversion back (Let's not talk about const_cast's undefined behaviour..). Now methods need to be defined to take either an X or a const X parameter and need to flow the const through to all their callers. But that's not all - now we need to be able to mark virtual methods to declare if the receiver is const or not. And to mark return types as const or not. There's a pretty massive cost to the user's mental model and to the language as well as producing on-going compatibility problems (is adding or removing "const" modifiers binary compatible? Source compatible?) for library evolution. Syntactic sugar to indicate "I won't write to this" doesn't really pay its way. The costs are quite high. From: Markus Karg Date: Wednesday, December 20, 2023 at 5:32 AM To: 'Holo The Sage Wolf' Cc: Dan Heidinga , 'Archie Cobbs' , 'amber-dev' Subject: [External] : AW: Frozen objects? 
C++ ("const") does not freeze the memory region at all, and it does not need to (and hence is quite fast at runtime, as it does not even need to check the access). The compiler simply refuses to compile any attempt to write via a read-only reference. That would be sufficient for most cases. Freezing objects is a different idea, and only needed in edge cases. So I would plead for introducing compile-time read-only references first, as they are the lower-hanging fruit.

-Markus

Von: Holo The Sage Wolf [mailto:holo3146 at gmail.com] Gesendet: Dienstag, 19. Dezember 2023 14:18 An: Markus Karg Cc: Dan Heidinga; Archie Cobbs; amber-dev Betreff: Re: Frozen objects?

How do you freeze a memory region without talking about freezing objects? Unless your data is flat (so only value classes, primitives, and arrays, two of which won't benefit from freezing), the only way to get freezing that is enforced at compile time is to talk about objects.

On Tue, 19 Dec 2023, 10:04 Markus Karg, < markus at headcrashing.eu> wrote:

I wonder why we discuss freezing *objects* (which takes time) but not simply freezing *references* (like `const` does in C++)?

-Markus

Von: amber-dev [mailto: amber-dev-retn at openjdk.org] Im Auftrag von Dan Heidinga Gesendet: Montag, 18. Dezember 2023 16:04 An: Archie Cobbs; amber-dev Betreff: Re: Frozen objects?

Let me throw out one other concern: races. The invariant frozen objects want is that the application and runtime can trust they will never be mutated again. Unfortunately, if the object is published across threads before it is frozen, then that invariant is very difficult and expensive to maintain. If two threads, A & B, both have references to the object and thread A freezes it, B may still be publishing writes to it that A only observes later. To ensure the right JMM happens-before relationship for fields of Freezable objects, both reads and writes would need to be more expensive (volatile semantics?)
until a thread could validate the object it was operating on was frozen. Freezing is not just a free set of unexplored optimizations. There are also new costs associated with it across the runtime (field read/write, profiling, etc.).

--Dan

From: amber-dev on behalf of Archie Cobbs Date: Saturday, December 16, 2023 at 12:33 PM To: amber-dev Subject: Frozen objects?

Caveat: I'm just trying to educate myself on what's been discussed in the past, not actually suggest a new language feature. I'm sure this kind of idea has been discussed before, so feel free to point me at some previous thread, etc.

In C we have 'const', which essentially means "the memory allocated to this thing is immutable". The nice thing about 'const' is that it can apply to an individual variable or field in a structure, or it can apply to an entire C structure or C array. In effect it applies to any contiguous memory region that can be named/identified at the language level. On the other hand, it's just a language fiction, i.e., it can always be defeated at runtime by casting (except for static constants).

In Java we have 'final', which (in part) is like 'const' for fields and variables, but unlike C's 'const', 'final' can't be applied to larger memory regions like entire objects or entire arrays. In C, 'const' can be applied "dynamically" in the sense that I can cast foo to const foo. Of course, this is only enforced at the language level.

Summary of differences between C 'const' and Java 'final':

- Granularity:
  - C: Any contiguous memory region that has a language name/identification
  - Java: At most 64 bits at a time (*) and arrays are not included
  - Advantage: C
- Enforcement:
  - C: Enforced only by the compiler (mostly)
  - Java: Enforced by the compiler and at runtime
  - Advantage: Java
- Dynamic Application:
  - C: Yes
  - Java: No
  - Advantage: C

(*) With records and value objects we are gradually moving towards the ability for larger things than an individual field to be 'const'.
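Archie's "Granularity" row is easy to demonstrate: a final array variable pins the reference, but the elements stay writable. A minimal sketch (the class name is mine, for illustration only):

```java
// 'final' protects only the reference, never the array contents it points to.
public class FinalVsConst {
    public static void main(String[] args) {
        final int[] a = {1, 2, 3};
        // a = new int[]{4, 5, 6};   // rejected by the compiler: a is final
        a[0] = 99;                   // allowed: 'final' does not freeze elements
        System.out.println(a[0]);    // prints 99
    }
}
```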
More generally, Java has slowly been glomming on some of the goodness from functional programming, including making it easier to declare and work with immutable data. This all raises the question: why not take this idea to its logical conclusion? And while we're at it, make the capability fully dynamic, instead of limiting 'freezing' to construction time?

In other words, add the ability to "freeze" an object or array. If 'x' is frozen, whatever 'x' directly references becomes no longer mutable. A rough sketch...

Add a new Freezable interface:

    public interface Freezable {
        boolean isFrozen();
        static boolean freeze(Freezable obj);   // returns false if already frozen
    }

Arrays automatically implement Freezable (just like they do Cloneable).

What about the memory model? Ideally it would work as if written like this:

    public class Foo implements Freezable {
        private volatile boolean frozen;   // set to true by Freezable.freeze()
        void mutateFooContent(Runnable mutation) {
            if (this.frozen)
                throw new FrozenObjectException();
            else
                mutation.run();
        }
    }

But there could be a better trade-off of performance vs. semantics. Other trade-offs...

- (-) All mutations to a Freezable would require a new 'frozen' check (* see below)
- (-) There would have to be a new bit allocated in the object header
- (+) Eliminate zillions of JDK defensive array copies (things like String.toCharArray())
- (+) JIT optimizations for constant-folding, etc.
- (+) GC optimizations
  - (*) Put frozen objects into a read-only region of memory to eliminate mutation checks
  - Optimize scanning of frozen references (since they never change)

I'm curious how other people think this idea would or wouldn't make sense for Java & what's been decided in the past.

Thanks,

-Archie

-- Archie L. Cobbs

-------------- next part -------------- An HTML attachment was scrubbed...
URL: 

From brian.goetz at oracle.com Thu Dec 21 16:27:15 2023 From: brian.goetz at oracle.com (Brian Goetz) Date: Thu, 21 Dec 2023 11:27:15 -0500 Subject: [External] : AW: Frozen objects? In-Reply-To: References: <007f01da3251$fde2a600$f9a7f200$@eu> <023901da332f$c0e19660$42a4c320$@eu> <004201da3425$9d43c6f0$d7cb54d0$@eu> Message-ID: 

Dan is 100% correct here; there's a reason we haven't pursued this, and it's not because we've never used C++. It's not even a close call; it's an obvious loser.

> So you think frozen objects will be any simpler?

Dan never suggested frozen objects were a good idea either; someone asked a question about them, that's all. And it's disingenuous to suggest that these are the only two options, or that if you don't like A, then you must like B. (As a rule of thumb: arguments that start with "So you think / so you're saying" are almost never what that person thinks or is saying.)

In any case, we are way outside the charter of amber-dev here. I think best to stop here.

On 12/21/2023 11:08 AM, Dan Heidinga wrote:
> Not unsolvable, but the payoff isn't there.
> [...]

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From markus at headcrashing.eu Thu Dec 21 16:35:55 2023 From: markus at headcrashing.eu (Markus Karg) Date: Thu, 21 Dec 2023 17:35:55 +0100 Subject: AW: [External] : AW: Frozen objects? In-Reply-To: References: <007f01da3251$fde2a600$f9a7f200$@eu> <023901da332f$c0e19660$42a4c320$@eu> <004201da3425$9d43c6f0$d7cb54d0$@eu> Message-ID: <007201da342b$c58385a0$508a90e0$@eu> 

So this discussion is as useless as Dan's request that I shall come back with all the requested answers. As apparently that is declared to be a dead end upfront, I hereby reject the idea, and I look forward to seeing the solution provided for frozen objects instead.
:-) 

-Markus 

Von: Brian Goetz [mailto:brian.goetz at oracle.com] Gesendet: Donnerstag, 21. Dezember 2023 17:27 An: Dan Heidinga; Markus Karg; 'Red IO' Cc: 'Holo The Sage Wolf'; 'Archie Cobbs'; 'amber-dev' Betreff: Re: [External] : AW: Frozen objects? 

Dan is 100% correct here; there's a reason we haven't pursued this, and it's not because we've never used C++. [...] 

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From john.r.rose at oracle.com Thu Dec 21 23:11:22 2023 From: john.r.rose at oracle.com (John Rose) Date: Thu, 21 Dec 2023 15:11:22 -0800 Subject: [External] : AW: Frozen objects? In-Reply-To: References: <007f01da3251$fde2a600$f9a7f200$@eu> <023901da332f$c0e19660$42a4c320$@eu> Message-ID: <3F442EBD-1500-427B-B0AB-2284B8E5D14E@oracle.com> 

On 20 Dec 2023, at 8:58, Dan Heidinga wrote:

> C++ "const" is a bit of a mess. It's not only a local property that prevents writes to the reference; it's also a viral property that infects the type system.

Well put, Dan. Thanks for writing that point so I don't have to. But I do have some more comments in this vein.

Adding distinctions to a type system is never a cost-free move. They put a new burden on all programmers to maintain them, as well as a burden on all related tools and infrastructure. A new distinction must be remarkably useful to pay for its costs. Java generics is one good example, and maybe the only example, of new distinctions successfully added to Java's type system. Even today the VM is designed to ignore the distinctions introduced by generics. It operates on "erased types".

C# has made interesting forays into the game of adding new type distinctions that apply uniformly across the pre-existing type system. One is adding to every T an "expression-of-T" type for AST reification. It had interesting results. The Babylon project is developing something similar, but less intrusive, with method and lambda reflection.
Another is adding an async mode at selected points (functional types) which instructs the system to compile affected code into a reactive form. This was, I think, not a complete win, because async code and regular code do not interoperate completely, so programmers must assign incompatible "colors" to different regions of code and the API points used within them. The "coloring problem" of assigning reactive and normal "coloration" to code has always struck me as a move similar to the C++ decision to "color" methods as const or non-const (and often both ways repeatedly). With Java, we tend to look skeptically on any attempt to "color" code. In particular, we invested in Project Loom precisely to avoid having to "color" code as reactive vs. non-reactive.

Immutable data are interesting and highly useful, but modeling immutability in the type system would probably be a mistake. Note that the Java Collections API refuses to duplicate itself into "mutable" and "immutable" colors. Mutability in Java is a mostly dynamic property.

Because of all of the above, I do think frozen arrays may be in our future, but not a first-class type that looks like "const T[]" or "final T[]". At most we might have a declaration format like "__Frozen int[] FA = {1,2,3}", but it would be equivalent to something like "int[] FA = Arrays.freeze(new int[]{1,2,3})", maybe with a subtle compilation format that avoids the temporary. (I will not launch a syntax bikeshed discussion by trying for a "real looking" syntax. Just "__Frozen" is sufficient to get my point across. I will ignore comments on the cosmetics of my example keyword, since it is manifestly non-cosmetic.)

IN THEORY, frozen instances are possible in the VM, with some header hacking and some extra barriers on affected putfield instructions. IN PRACTICE, we would probably need to distinguish in user code which methods are intended for all instances, and which are for mutable ones.
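John's "__Frozen" and "Arrays.freeze" above are hypothetical notation, not real APIs. The closest existing analogue keeps mutability a dynamic property, exactly in the spirit of the Collections API he mentions: List.of produces an unmodifiable list whose mutators throw at runtime, with no "const" distinction in the static type. A small sketch (the class name is mine):

```java
import java.util.List;

// Today's nearest stand-in for a "frozen" array: an unmodifiable list.
// Immutability is enforced dynamically, not by the type system.
public class FrozenAnalogue {
    public static void main(String[] args) {
        List<Integer> fa = List.of(1, 2, 3);   // static type is just List<Integer>
        try {
            fa.set(0, 99);                     // rejected at runtime
        } catch (UnsupportedOperationException e) {
            System.out.println("unmodifiable: " + fa);
        }
    }
}
```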
There are no good examples, IMO, of making this distinction in other programming languages. C++ const is not a good example for Java to follow because it is too viral, too invasive. So, given the above, the reserved keyword "const" is almost certainly not going to be adopted into Java along the lines of C++ const. Nor is it likely to be adopted as a new type distinction between immutable and mutable data structures (which C++ const fails to fully express).

I will close with a speculation about what "const" MIGHT be used for in the future, in Java. (This is just me wondering. I agree with the consensus to leave the dust on it undisturbed indefinitely in the Java language specification.)

Java has a special place in the world of languages because it has a reasonably strong static type system, and yet execution does not depend greatly on the type system. In the VM, a Java program acts in many ways like a Smalltalk or Lisp program. At the user level this surfaces with the usefulness of the Object type and interface types. Part of the interplay between static and dynamic typing is managed in Java by making permanent decisions dynamically. You can load a class dynamically based on runtime computations, but once loaded it is loaded forever. For that reason, there is a special place in Java for dynamic computations at the time of symbolic resolution. The invokedynamic and CONSTANT_dynamic features in the VM live in this special place: you spin up something like a lambda class on demand, and then use it moving forward. Spinning up program resources (code and data) is inherent in the VM but not in the language user model, except in a few places, such as what happens the first time a static field or method is accessed in a given class. (Static initializers are run then, at most once.) PERHAPS the "const"
keyword might someday play a role in maintaining the distinction between computations which take place at most once, to spin up program resources that will be permanently available thereafter. It?s more likely that if we need a keyword it will be some other more ?transparent? word. It?s not nearly time to design any of this, however, within the language. First we have to build (in the existing language) Java APIs and data structures which make such code patterns clear and easy to work with. Today, the work on ComputedConstant is where this action is. Also, program resources which are derived by code transformations are an area we are experimenting with in Project Babylon. There are also experiments for offline derivation in Leyden. These experiments all need to make much more progress before we jump into language design. This hesitation to modify Java is precisely because Java is already an exquisitely effective notation for formalizing and modeling and thinking about all kinds of complicated designs, including immutability and the spin-up of program resources. So we can prototype new ideas almost forever in Java without touching the language. When we do make changes, we can be reasonably sure, through long experience, that the changes will be profitable. ? John From markus at headcrashing.eu Fri Dec 22 14:36:14 2023 From: markus at headcrashing.eu (Markus Karg) Date: Fri, 22 Dec 2023 15:36:14 +0100 Subject: AW: [External] : AW: Frozen objects? In-Reply-To: <3F442EBD-1500-427B-B0AB-2284B8E5D14E@oracle.com> References: <007f01da3251$fde2a600$f9a7f200$@eu> <023901da332f$c0e19660$42a4c320$@eu> <3F442EBD-1500-427B-B0AB-2284B8E5D14E@oracle.com> Message-ID: <019901da34e4$377a4a00$a66ede00$@eu> John, thanks a lot for sharing, this is the definitive answer I originally hoped for when asking about freezed references vs freezed objects! 
:-) You guys are all doing a great job, and I do not want to be misunderstood: I didn't want to upset or criticize anybody; I just had a bad idea that apparently was discussed and ruled out a long time ago by your team. In the end, I already rejected my proposal recently, and really, I am looking forward to the solution your team will some day come up with for read-only arrays.

Merry Christmas to all of you!

-Markus

From vab2048 at gmail.com  Thu Dec 28 14:59:10 2023
From: vab2048 at gmail.com (Vikram Bakshi)
Date: Thu, 28 Dec 2023 14:59:10 +0000
Subject: Allowing inheritance with Records?
Message-ID:

Hello,

Is the decision to not allow inheritance for records set in stone? Or will this be opened up and explored in the future?

One of the goals of records (from Brian Goetz's Devoxx talk) is to "model data as data", and allowing inheritance would offer a powerful way of modelling data.

Right now we have to write interfaces for shared fields, static methods for shared behaviour, or resort to copying and pasting, which I do not think is ideal.

Regards,
Vikram

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brian.goetz at oracle.com  Thu Dec 28 16:38:40 2023
From: brian.goetz at oracle.com (Brian Goetz)
Date: Thu, 28 Dec 2023 11:38:40 -0500
Subject: Allowing inheritance with Records?
In-Reply-To:
References:
Message-ID: <32c477bc-3aeb-4be3-baf2-c67020b43f5a@oracle.com>

It depends what you mean by "inheritance".
Will records ever be able to extend an arbitrary class like SocketInputStream? That's a pretty clear "definitely not." Is there room for a more restricted category of abstract classes that records could extend without giving up their semantic benefits and safety guarantees? Quite possibly, including perhaps the notion of "abstract record", which permits layered modeling but maintains the safety guarantees of records.

We explored abstract records during the design of records, and concluded that while they were a possibility, there are a number of details to work out, some of which are a little uncomfortable. (Separately, in Valhalla, we have explored what is required for an abstract class to be extendible by a value class, which might also inform an answer.) So there is room to explore this further, but it is not currently a topic of active investigation.

On 12/28/2023 9:59 AM, Vikram Bakshi wrote:
> Hello,
>
> Is the decision to not allow inheritance for records set in stone? Or
> will this be opened up and explored in the future?
>
> One of the goals of records (from Brian Goetz Devoxx talk) is to
> "model data as data", and allowing inheritance would offer a powerful
> way of modelling data.
>
> Right now we have to write interfaces for fields which are
> shared/static methods for shared behaviour/copying and pasting - which
> I do not think is ideal.
>
> Regards,
> Vikram

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From forax at univ-mlv.fr  Fri Dec 29 14:05:45 2023
From: forax at univ-mlv.fr (Remi Forax)
Date: Fri, 29 Dec 2023 15:05:45 +0100 (CET)
Subject: Allowing inheritance with Records?
In-Reply-To:
References:
Message-ID: <1397377203.91697872.1703858745603.JavaMail.zimbra@univ-eiffel.fr>

> From: "Vikram Bakshi"
> To: amber-dev at openjdk.java.net
> Sent: Thursday, December 28, 2023 3:59:10 PM
> Subject: Allowing inheritance with Records?

> Hello,

> Is the decision to not allow inheritance for records set in stone?
> Or will this be opened up and explored in the future?

> One of the goals of records (from Brian Goetz Devoxx talk) is to "model data as
> data", and allowing inheritance would offer a powerful way of modelling data.

I almost spilled my tea :)

Inheritance is about sharing behaviors, so it works best when the data behaviors do not change much. But business requirements and business data/computations change frequently, so using inheritance is a kind of anti-pattern for data. It's far, far easier to use records, empty interfaces, and pattern matching (switching on types) when you want to define data and their business rules than to try to use inheritance for a case where it will not work well.

> Right now we have to write interfaces for shared fields, static methods
> for shared behaviour, or copying and pasting - which I do not think is ideal.

> Regards,
> Vikram

regards,
Rémi

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
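[Editorial sketch of the style Rémi describes: records for the data, an interface to group them, and business rules expressed as switches on types rather than as inherited methods. All type names are invented for illustration; a sealed interface is used (rather than the empty interface Rémi mentions) so the switch is exhaustive without a default. Requires Java 21 for pattern matching in switch.]

```java
// Data as data: records carry the fields, the interface only groups them.
sealed interface Shape permits Circle, Rectangle {}
record Circle(double radius) implements Shape {}
record Rectangle(double width, double height) implements Shape {}

public class ShapeDemo {
    // The business rule lives outside the data types; changing a rule or
    // adding a new one does not touch the records themselves.
    static double area(Shape s) {
        return switch (s) {
            case Circle c    -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> r.width() * r.height();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rectangle(2, 3))); // prints 6.0
    }
}
```

Because Shape is sealed, the compiler rejects the switch if a new record is added to the hierarchy but not handled, which recovers much of the safety people usually want from inheritance.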