From jordan at jordanzimmerman.com Mon Aug 1 11:58:24 2022 From: jordan at jordanzimmerman.com (Jordan Zimmerman) Date: Mon, 1 Aug 2022 13:58:24 +0200 Subject: Possible Pattern Matching for switch (Third Preview) bug Message-ID: There doesn't seem to be a way to have a switch case pattern for multiple related patterns. Given that it works in an instanceof pattern I would think it might work in a switch pattern. But, maybe not. Anyway here's what I found. Given: public interface MyFace {} public record MyEye() implements MyFace {} public record MyNose() implements MyFace {} public void examine(Object face) { switch (face) { case MyEye eye, MyNose nose -> System.out.println("part of my face"); default -> System.out.println("Not part of my face"); } } This produces: "illegal fall-through to a pattern". However, this works with an instanceof pattern. E.g. public void examine(Object face) { if ((face instanceof MyEye eye) || (face instanceof MyNose nose)) { System.out.println("part of my face"); } else { System.out.println("Not part of my face"); } } Of course, the instanceof test is not very useful as the bound variables "eye" or "nose" are only scoped to the immediate test (and not in the if block). So, this may not be a bug? Anyway, I thought I'd mention it. For the switch it would be useful to have common behavior for several related patterns without having to use a method to do it. The bound variables would be ignored (maybe via an upcoming wonderbar "_"). Cheers. -Jordan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From forax at univ-mlv.fr Mon Aug 1 12:12:26 2022 From: forax at univ-mlv.fr (Remi Forax) Date: Mon, 1 Aug 2022 14:12:26 +0200 (CEST) Subject: Possible Pattern Matching for switch (Third Preview) bug In-Reply-To: References: Message-ID: <1620576847.17578230.1659355946194.JavaMail.zimbra@u-pem.fr> > From: "Jordan Zimmerman" > To: "amber-dev" > Sent: Monday, August 1, 2022 1:58:24 PM > Subject: Possible Pattern Matching for switch (Third Preview) bug > There doesn't seem to be a way to have a switch case pattern for multiple > related patterns. Given that it works in an instanceof pattern I would think it > might work in a switch pattern. But, maybe not. Anyway here's what I found. > Given: >> public interface MyFace {} >> public record MyEye() implements MyFace {} >> public record MyNose() implements MyFace {} >> public void examine(Object face) { >> switch (face) { >> case MyEye eye, MyNose nose -> System.out.println("part of my face"); >> default -> System.out.println("Not part of my face"); >> } >> } > This produces: "illegal fall-through to a pattern". > However, this works with an instanceof pattern. E.g. >> public void examine(Object face) { >> if ((face instanceof MyEye eye) || (face instanceof MyNose nose)) { >> System.out.println("part of my face"); >> } >> else { >> System.out.println("Not part of my face"); >> } >> } > Of course, the instanceof test is not very useful as the bound variables "eye" > or "nose" are only scoped to the immediate test (and not in the if block). So, > this may not be a bug? Anyway, I thought I'd mention it. For the switch it > would be useful to have common behavior for several related patterns without > having to use a method to do it. The bound variables would be ignored (maybe > via an upcoming wonderbar "_"). 
yes, that's one possible future, introduce '_' so one can write public void examine(Object o) { switch (o) { case MyEye _, MyNose _ -> System.out.println("part of my face"); default -> System.out.println("Not part of my face"); } } But we have to answer the question: where can we put '_'? As a method parameter, lambda parameter, catch parameter, local variable, etc. Perhaps we do not need '_', and it's "better" to allow the type pattern to be declared without a binding, the way the record pattern has an optional binding for the whole pattern? > Cheers. > -Jordan regards, Rémi -------------- next part -------------- An HTML attachment was scrubbed... URL: From thihup at gmail.com Mon Aug 1 12:28:12 2022 From: thihup at gmail.com (Thiago Henrique Hupner) Date: Mon, 1 Aug 2022 09:28:12 -0300 Subject: Multi-catch and exhaustiveness check Message-ID: Hello all! I've played around with the exhaustiveness check and I'd like to discuss whether the following code should compile, or whether it is right not to. import java.util.*; import java.io.*; sealed abstract class MyAbstractException extends RuntimeException { final static class MyException extends MyAbstractException {} } final class MyUncheckedException extends UncheckedIOException { public MyUncheckedException() { super(null); } } public class SealedException { public static void main(String[] args) { try { throw new MyUncheckedException(); } catch (MyAbstractException | MyUncheckedException e) { switch (e) { case MyAbstractException.MyException a -> {} case MyUncheckedException b -> {} } } } } As MyUncheckedException is final, and MyAbstractException.MyException is the only implementation available for MyAbstractException, I guess it is exhaustive, but the compiler disagrees. SealedException.java:23: error: the switch statement does not cover all possible input values switch (e) { ^ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brian.goetz at oracle.com Mon Aug 1 14:27:53 2022 From: brian.goetz at oracle.com (Brian Goetz) Date: Mon, 1 Aug 2022 10:27:53 -0400 Subject: Possible Pattern Matching for switch (Third Preview) bug In-Reply-To: References: Message-ID: <8fa06ef2-7675-c6ea-3702-c3273cb0c09b@oracle.com> The switch behavior is as intended. The instanceof behavior is slightly surprising, but harmless -- basically like a dead local variable. But in both cases, a dead binding is unlikely to mean what you intend. As you surmise, the restriction for switch has more to do with bindings than with patterns; the case label is unable (or unwilling) to define bindings that cannot be used in the case block. Currently we express this restriction with the grammar (one pattern per case), but we may relax this in the future and instead enforce the restriction based on DA of bindings, in which case you might be able to do something like: case Foo _, Bar _: A related possibility is that it may be possible to _merge_ bindings: case Box(String x), Bag(String x): System.out.println(x); Here, we would match one pattern or the other, and either way, one of them would bind `String x`. We discussed this for a while but deferred it for later. The main challenge here is: "where's the declaration of x?" This may be confusing to both humans and tools, to have two declarations for a common use. On 8/1/2022 7:58 AM, Jordan Zimmerman wrote: > There doesn't seem to be a way to have a switch case pattern for > multiple related patterns. Given that it works in an instanceof > pattern I would think it might work in a switch pattern. But, maybe > not. Anyway here's what I found. 
> > Given: > > public interface MyFace {} > > public record MyEye() implements MyFace {} > public record MyNose() implements MyFace {} > > public void examine(Object face) { > switch (face) { > case MyEye eye, MyNose nose -> System.out.println("part of my face"); > default -> System.out.println("Not part of my face"); > } > } > > This produces: "illegal fall-through to a pattern". > > However, this works with an instanceof pattern. E.g. > > public void examine(Object face) { > if ((face instanceof MyEye eye) || (face instanceof MyNose nose)) { > System.out.println("part of my face"); > } > else { > System.out.println("Not part of my face"); > } > } > > > Of course, the instanceof test is not very useful as the bound > variables "eye" or "nose" are only scoped to the immediate test (and > not in the if block). So, this may not be a bug? Anyway, I thought I'd > mention it. For the switch it would be useful to have common behavior > for several related patterns without having to use a method to do it. > The bound variables would be ignored (maybe via an upcoming wonderbar > "_"). > > Cheers. > > -Jordan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Mon Aug 1 14:38:26 2022 From: brian.goetz at oracle.com (Brian Goetz) Date: Mon, 1 Aug 2022 10:38:26 -0400 Subject: Multi-catch and exhaustiveness check In-Reply-To: References: Message-ID: First question: what is the type of `e` in your catch block? You might think it is the union type MyAbstractException | MyUncheckedException, but in fact it is the least upper bound (LUB) of the two, which turns out to be RuntimeException, as per JLS 14.20: > The declared type of an exception parameter that denotes its type as a > union with > alternatives D1 | D2 | ... | Dn is lub(D1, D2, ..., Dn). So your switch is a switch on a variable of type RuntimeException, and the switch is obviously not exhaustive. So the compiler is correct. 
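(A small self-contained illustration, not from the original mail -- class and method names here are made up -- showing that the multi-catch parameter's static type really is the LUB of the alternatives:)

```java
public class LubDemo {
    static String typeOfCatchParam() {
        try {
            throw new java.io.UncheckedIOException(new java.io.IOException("boom"));
        } catch (IllegalStateException | java.io.UncheckedIOException e) {
            // The static type of 'e' is lub(IllegalStateException,
            // UncheckedIOException) = RuntimeException, so this assignment
            // compiles without a cast -- which is why a switch over 'e'
            // must be exhaustive over all of RuntimeException.
            RuntimeException r = e;
            return r.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        // At runtime 'e' is of course still the concrete exception type.
        System.out.println(typeOfCatchParam());  // UncheckedIOException
    }
}
```

Note the asymmetry: the dynamic type is preserved (and a bare `throw e;` is even rethrown precisely, per the "more precise rethrow" rule from Project Coin), but the static type the switch sees is only the LUB.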
You might ask: why would we define the type of the catch formal like this? Well, you have to go back to the context in which multi-catch was added. This was the Project Coin days, where the scope was very limited -- including no type system changes -- and the use of LUB in this way was a trade-off that made it mostly acceptable. The alternative would have been adding full-blown union types, which would not have been a small change (see this PhD thesis which outlines what would have been involved: https://scholarship.rice.edu/handle/1911/103594). On 8/1/2022 8:28 AM, Thiago Henrique Hupner wrote: > Hello all! > > I've played around with the exhaustiveness check and I'd like to > discuss whether the following code should compile or it is right not > to compile. > > import java.util.*; > import java.io.*; > > sealed abstract class MyAbstractException extends RuntimeException { >     final static class MyException extends MyAbstractException {} > } > > final class MyUncheckedException extends UncheckedIOException { >     public MyUncheckedException() { >         super(null); >     } > } > > public class SealedException { >     public static void main(String[] args) { >         try { >             throw new MyUncheckedException(); >         } catch (MyAbstractException | MyUncheckedException e) { >             switch (e) { >                 case MyAbstractException.MyException a -> {} >                 case MyUncheckedException b -> {} >             } >         } >     } > } > > As MyUncheckedException is final, and MyAbstractException.MyException > is the only implementation available for MyAbstractException, I guess > it is exhaustive, but the compiler disagrees. > > SealedException.java:23: error: the switch statement does not cover > all possible input values >             switch (e) { >             ^ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thihup at gmail.com Mon Aug 1 14:48:55 2022 From: thihup at gmail.com (Thiago Henrique Hupner) Date: Mon, 1 Aug 2022 11:48:55 -0300 Subject: Multi-catch and exhaustiveness check In-Reply-To: References: Message-ID: That makes sense. Thank you. On Mon, Aug 1, 2022 at 11:38, Brian Goetz wrote: > First question: what is the type of `e` in your catch block? > > You might think it is the union type MyAbstractException | > MyUncheckedException, but in fact it is the least upper bound (LUB) of the > two, which turns out to be RuntimeException, as per JLS 14.20: > > The declared type of an exception parameter that denotes its type as a > union with > alternatives D1 | D2 | ... | Dn is lub(D1, D2, ..., Dn). > > > So your switch is a switch on a variable of type RuntimeException, and the > switch is obviously not exhaustive. So the compiler is correct. > > You might ask: why would we define the type of the catch formal like > this? Well, you have to go back to the context in which multi-catch was > added. This was the Project Coin days, where the scope was very limited -- > including no type system changes -- and the use of LUB in this way was a > trade-off that made it mostly acceptable. The alternative would have been > adding full-blown union types, which would not have been a small change > (see this PhD thesis which outlines what would have been involved: > https://scholarship.rice.edu/handle/1911/103594). > > On 8/1/2022 8:28 AM, Thiago Henrique Hupner wrote: > > Hello all! > > I've played around with the exhaustiveness check and I'd like to > discuss whether the following code should compile or it is right not to > compile. 
> > import java.util.*; > import java.io.*; > > sealed abstract class MyAbstractException extends RuntimeException { > final static class MyException extends MyAbstractException {} > } > > final class MyUncheckedException extends UncheckedIOException { > public MyUncheckedException() { > super(null); > } > } > > public class SealedException { > public static void main(String[] args) { > try { > throw new MyUncheckedException(); > } catch (MyAbstractException | MyUncheckedException e) { > switch (e) { > case MyAbstractException.MyException a -> {} > case MyUncheckedException b -> {} > } > } > } > } > > As MyUncheckedException is final, and MyAbstractException.MyException is > the only implementation available for MyAbstractException, I guess it is > exhaustive, but the compiler disagrees. > > SealedException.java:23: error: the switch statement does not cover all > possible input values > switch (e) { > ^ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajcave at google.com Tue Aug 9 01:09:56 2022 From: ajcave at google.com (Andrew Cave) Date: Mon, 8 Aug 2022 19:09:56 -0600 Subject: Getters vs. public final fields in records Message-ID: What is the motivation for defining fields as private and generating getters in records, as opposed to public fields? My reason for preferring public fields: Getters for immutable fields interact poorly with static analysis, e.g. null safety -- it is not obvious that the following is null-safe: if (r.foo() != null) r.foo().bar(); While the following can be statically deduced as null-safe when foo is a final field: if (r.foo != null) r.foo.bar(); I'm sure public final fields open more opportunities for compiler optimization as well. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrjarviscraft at gmail.com Tue Aug 9 01:16:59 2022 From: mrjarviscraft at gmail.com (Petr Portnov) Date: Tue, 9 Aug 2022 04:16:59 +0300 Subject: Getters vs. 
public final fields in records In-Reply-To: References: Message-ID: > I'm sure public final fields open more opportunities for compiler optimization as well. AFAIK, the fields of records are already treated specially[1] (i.e. the JVM treats them as truly final), so this somewhat transitively applies to trivial getters. [1]: https://github.com/openjdk/jdk/blob/master/src/hotspot/share/ci/ciField.cpp#L240 On Tue, Aug 9, 2022 at 04:10, Andrew Cave wrote: > What is the motivation for defining fields as private and generating > getters in records, as opposed to public fields? > > My reason for preferring public fields: Getters for immutable fields > interact poorly with static analysis, e.g. null safety -- it is not obvious > that the following is null-safe: > > if (r.foo() != null) r.foo().bar(); > > While the following can be statically deduced as null-safe when foo is a > final field: > > if (r.foo != null) r.foo.bar(); > > I'm sure public final fields open more opportunities for compiler > optimization as well. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kamil at sevecek.net Tue Aug 9 09:06:06 2022 From: kamil at sevecek.net (Kamil Ševeček) Date: Tue, 9 Aug 2022 11:06:06 +0200 Subject: Proposal: `finally` block exception suppression In-Reply-To: <796599f67b6040f9b242d83c198b7bef@vodafonemail.de> References: <7902aeb4293c41a5a9695f0ba723ba25@vodafonemail.de> <796599f67b6040f9b242d83c198b7bef@vodafonemail.de> Message-ID: Personally, I think finally should have the suggested semantics with suppressedException. On the other hand, I understand that this would change the behavior of an existing language construct, which has no previous precedent. I would argue for a path similar to switch/case when avoiding fall-through. Could you not come up with an alternative syntax that is as intuitive as the original try ... finally, but has new semantics? 
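(For illustration, not from the original mail: the proposed semantics can be emulated in today's Java with an explicit helper built on Throwable.addSuppressed; the names FinallyDemo and runSuppressing are made up.)

```java
public class FinallyDemo {
    interface ThrowingRunnable { void run() throws Exception; }

    // Run 'body', then 'cleanup'. If both throw, the body's exception
    // wins and the cleanup's exception is attached as suppressed --
    // the behavior the proposal would give a plain finally block.
    static void runSuppressing(ThrowingRunnable body, ThrowingRunnable cleanup) throws Exception {
        Throwable primary = null;
        try {
            body.run();
        } catch (Throwable t) {
            primary = t;
        } finally {
            try {
                cleanup.run();
            } catch (Throwable t2) {
                if (primary == null) primary = t2;     // only the cleanup failed
                else primary.addSuppressed(t2);        // both failed: suppress
            }
        }
        if (primary instanceof Exception e) throw e;
        if (primary instanceof Error err) throw err;
    }

    public static void main(String[] args) {
        try {
            runSuppressing(
                () -> { throw new IllegalStateException("body failed"); },
                () -> { throw new RuntimeException("cleanup failed"); });
        } catch (Exception e) {
            System.out.println(e.getMessage());                    // body failed
            System.out.println(e.getSuppressed()[0].getMessage()); // cleanup failed
        }
    }
}
```

This is exactly what try-with-resources generates for its implicit close() call; the proposal is essentially to extend the same treatment to explicit finally blocks.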
Kind regards Kamil Sevecek On Sun, 17 Jul 2022 at 14:11, < some-java-user-99206970363698485155 at vodafonemail.de> wrote: > > Besides the compatibility concerns of changing the operational semantics > > of code that has been written for year or even decades > > There might be a risk of compatibility issues, but I think it is rather > low. As outlined in my proposal > it would mostly affect applications which inspect the suppressed > exceptions in some way. > I can definitely understand your concerns regarding compatibility, but I > am also a bit afraid that > this whole proposal is rejected due to theoretical issues which might > never occur in practice. > > > should judge how often suppressedExceptions are looked at today. JDK 7 > with try-with-resources > > shipped in July 2011 so there has been ample time for developers to take > advantage of the > > suppressed exceptions information > > That is probably difficult to measure, but this would also then turn this > whole discussion into > a discussion about whether suppressed exceptions as a whole are useful. > > I have done a quick search and found the following blog posts / websites > describing this issue. > Some have been written before the introduction of try-with-resources, but > as outlined in the > proposal, it is not possible to use try-with-resources in all situations. 
> - https://stackoverflow.com/q/25751709 > - https://stackoverflow.com/q/3779285 > - https://stackoverflow.com/q/59360652 > - > https://www.linuxtopia.org/online_books/programming_books/thinking_in_java/TIJ311_014.htm > - https://accu.org/journals/overload/12/62/barrettpowell_236/ > - https://errorprone.info/bugpattern/Finally > - > https://wiki.sei.cmu.edu/confluence/display/java/ERR05-J.+Do+not+let+checked+exceptions+escape+from+a+finally+block > > In addition to these there are the OpenJDK bug reports I mentioned in the > proposal: > - JDK-4988583: which is the same as this proposal > - JDK-7172206: bug which is caused by this flaw and still exists today (if > I see that correctly) > > One could now argue that this issue has been so widely discussed that > developers > should by now all be aware of this issue and avoid it. But I think it is > quite the opposite: > The current behavior is flawed and developers (who might even be aware of > this issue) > keep running into this issue and therefore try to warn each other about > it. And any > solutions or workarounds to this are pretty verbose and quite error-prone. > > Of course it would have been ideal if suppressed exceptions existed from > the beginning > and the `finally` block behaved like try-with-resources statements with > regards to exception > suppression. But I think with the current situation it would at least be > good to improve it in > the best way possible. > > If you and the other OpenJDK members think that this proposal is > definitely not going to be > included (or also independently from this decision), what do you think > about reviving JDK-5108147? 
> This would at least make manual exception suppression in `finally` blocks > easier: > ``` > try { > someAction(); > } > finally (Throwable t) { > try { > cleanUp(); > } > catch (Throwable t2) { > if (t == null) { > throw t2; > } else { > t.addSuppressed(t2); > } > } > } > ``` > > Kind regards > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Tue Aug 9 13:13:24 2022 From: brian.goetz at oracle.com (Brian Goetz) Date: Tue, 9 Aug 2022 09:13:24 -0400 Subject: Getters vs. public final fields in records In-Reply-To: References: Message-ID: <17510a1f-2206-4c63-786d-ab639b0786b4@oracle.com> > What is the motivation for defining fields as private and generating > getters in records, as opposed to public fields? First, I think you have the question backwards: it would be using public fields that would require a justification. We did consider this briefly during the design process, and realized this would be a terrible idea. In any case, the answer is simple: not all objects are immutable. If a record component were an array, or an ArrayList, or any other object with mutable state, having public fields would make it impractical to use these types in records in many cases, because we'd be unable to expose their state without also exposing their mutability. > My reason for preferring public fields: Getters for immutable fields > interact poorly with static analysis, e.g. null safety -- it is not > obvious that the following is null-safe: > > if (r.foo() != null) r.foo().bar(); > > While the following can be statically deduced as null-safe when foo is > a final field: > > if (r.foo != null) r.foo.bar(); I sympathize; null safety is difficult to verify. But that's no reason to undermine another feature just to make an already-bad problem a few percent better. > I'm sure public final fields open more opportunities for compiler > optimization as well. Don't be so sure! Getters are almost always inlined. 
(Inlining is usually a time-space tradeoff, but for methods as small as getters, it is a win on both time and space, so will almost always be taken.) From amaembo at gmail.com Tue Aug 9 20:48:40 2022 From: amaembo at gmail.com (Tagir Valeev) Date: Tue, 9 Aug 2022 22:48:40 +0200 Subject: Getters vs. public final fields in records In-Reply-To: References: Message-ID: Hello! On Tue, Aug 9, 2022 at 03:10, Andrew Cave wrote: > What is the motivation for defining fields as private and generating > getters in records, as opposed to public fields? > > My reason for preferring public fields: Getters for immutable fields > interact poorly with static analysis, e.g. null safety -- it is not obvious > that the following is null-safe: > > if (r.foo() != null) r.foo().bar(); > As a developer of the IntelliJ IDEA static analyzer, I can assure you that supporting this case for records (when accessors are not overridden) was a pretty trivial improvement and took like a couple of hours to implement and test. IDEA supports record accessors as well as plain fields. So I wouldn't buy the static analysis argument. With best regards, Tagir Valeev > While the following can be statically deduced as null-safe when foo is a > final field: > > if (r.foo != null) r.foo.bar(); > > I'm sure public final fields open more opportunities for compiler > optimization as well. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajcave at google.com Tue Aug 9 23:07:35 2022 From: ajcave at google.com (Andrew Cave) Date: Tue, 9 Aug 2022 17:07:35 -0600 Subject: Getters vs. public final fields in records In-Reply-To: <17510a1f-2206-4c63-786d-ab639b0786b4@oracle.com> References: <17510a1f-2206-4c63-786d-ab639b0786b4@oracle.com> Message-ID: > First, I think you have the question backwards: it would be using public > fields that would require a justification. This is only true from the perspective of what is currently common practice in the Java community. 
But common practice in Java has changed a lot in the last few decades, often due to what becomes ergonomic with new project-amber-like features :) If we look at what record-like things look like in other languages and settings (C structs, ML records, Pascal records), accessors are the strange choice that needs justification. > In any case, the answer is simple: not all objects are immutable. If a > record component were an array, or an ArrayList, or any other object > with mutable state, having public fields would make it impractical to > use these types in records in many cases, because we'd be unable to > expose their state without also exposing their mutability. I think you are suggesting techniques like explicitly defining accessors for mutable record components that make defensive copies? Are we really comfortable calling records "a simple aggregation of values" if I have to read the documentation or implementation of an accessor just to understand what the following code does: someRecord.someArrayComponent()[0] = 42 ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajcave at google.com Tue Aug 9 23:16:13 2022 From: ajcave at google.com (Andrew Cave) Date: Tue, 9 Aug 2022 17:16:13 -0600 Subject: Getters vs. public final fields in records In-Reply-To: References: Message-ID: > > As a developer of the IntelliJ IDEA static analyzer, I can assure you that > supporting this case for records (when accessors are not overridden) was a > pretty trivial improvement and took like a couple of hours to implement and > test it. > Unfortunately, static analysis is not just something that your tool chain performs, it's also something that human beings reading code perform, and for human beings I think the cost of double-checking that the accessor was not overridden to do something unexpected is more expensive than it is for IDEA. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ice1000kotlin at foxmail.com Tue Aug 9 23:44:37 2022 From: ice1000kotlin at foxmail.com (Tesla Zhang) Date: Tue, 9 Aug 2022 19:44:37 -0400 Subject: Improve switch expression for lambda Message-ID: Hi! I wanted to write the following one-line lambda Consumer From jordan at jordanzimmerman.com Wed Aug 10 10:04:23 2022 From: jordan at jordanzimmerman.com (Jordan Zimmerman) Date: Wed, 10 Aug 2022 11:04:23 +0100 Subject: Performance of Pattern Matching for switch (Third Preview) Message-ID: Hi Folks, I've been experimenting with Pattern Matching for switch (Third Preview). I noticed that the performance of these enhanced switches is far worse than manual matching. Is this due to this only being a preview and optimizations have yet to be done? Anyway, I thought I'd mention what I found as an FYI. Here's the jmh benchmark I used: https://gist.github.com/Randgalt/a68ceee62cd8127431cbe6e7afbfdf44 Here are the results: Benchmark Mode Cnt Score Error Units TestEnhancedSwitch.testEnhancedSwitch thrpt 5 30789.482 ± 17667.365 ops/s TestEnhancedSwitch.testManualSwitch thrpt 5 44651.612 ± 5135.641 ops/s Cheers. -Jordan -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Wed Aug 10 17:32:59 2022 From: brian.goetz at oracle.com (Brian Goetz) Date: Wed, 10 Aug 2022 13:32:59 -0400 Subject: Getters vs. public final fields in records In-Reply-To: References: <17510a1f-2206-4c63-786d-ab639b0786b4@oracle.com> Message-ID: <3b1e0165-4bf7-e20e-c722-45f3445b7f46@oracle.com> So, I want to make sure this conversation stays on track. You asked "why does it work this way", and you got an answer. And it's OK to ask follow-up questions for clarification. But we need to steer away from the (irresistibly tempting, I know) track of "let's redesign the feature", and we're in danger of veering into that. These issues were well considered during the design already, and these questions are not new. 
As mentioned, we did explicitly consider public fields, and concluded that would be a bad idea. (I know this may sound unfriendly, but just consider how this scales. There are 10M Java developers, who may each independently have their own ideas about how a feature should be designed, and want to "debate" it, sequentially and independently and possibly inconsistently, with the language designers.) Now, to your question. > If we look at what record-like things look like in other languages and > settings (C structs, ML records, Pascal records), accessors are the > strange choice that needs justification. > > In any case, the answer is simple: not all objects are immutable. > If a > record component were an array, or an ArrayList, or any other object > with mutable state, having public fields would make it impractical to > use these types in records in many cases, because we'd be unable to > expose their state without also exposing their mutability. > > I think you are suggesting techniques like explicitly defining > accessors for mutable record components that make defensive copies? > Are we really comfortable calling records "a simple aggregation of > values" if I have to read the documentation or implementation of an > accessor just to understand what the following code does: > This is exactly as we intended it to work. For records whose components are values (e.g., `record Point(int x, int y) {}`), the default implementation of constructor and accessor do exactly the right thing, and no one has to write any code. But when mutability rears its head, the "right thing" is less obvious, and making a one-size-doesn't-fit-all decision will make some users unhappy. As it turns out, the mathematical construct from Domain Theory known as "embedding-projection pairs" (which is a formalization of approximation) offers us a useful way to talk about the right thing. 
The key invariant that records must adhere to (specified in the refined contract of `Record::equals`) is that unpacking a record's components, and repacking them in a new record, should yield an "equals" record: r.equals(new R(r.c0(), r.c1(), ..., r.cn())) (This is as close as we can say in Java to "construction and deconstruction form an embedding-projection pair between the space of records and the cartesian product space of their components.") The reason that the situation you describe is inevitable comes from the fact that for mutable components (such as arrays), in some cases we want to judge two records equal if they hold the same _array object_, and in other cases we want to judge two records equal if they hold arrays _with the same contents_. And the language doesn't know which you want -- and it requires very little imagination to construct cases where each of these interpretations will be wrong. So unless we're willing to outlaw records that are not immutable all the way down, which would make records much less useful, we have to give people a way to say what they want. This is just like how we let map classes decide whether equals() means "same map", or means "same mappings". Both are valid interpretations, and both should be expressible. Similarly, some records may want to expose the mutability of their components to clients, and others will want to launder using defensive copies. All of these are expressible in the record model, just with different degrees of explicit code. If your component already chooses the answer for equals that you want -- such as ArrayList::equals comparing lists by contents, or arrays comparing by identity -- then you can do nothing. Otherwise, you have to override the constructor, accessor, and equals in concert to preserve the invariant that deconstruction and reconstruction yields something equivalent to the original. 
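(Not from the original mail -- a sketch of overriding the constructor, accessor, and equals in concert for a record with an array component; the names RecordEqualsDemo and Samples are made up. The record chooses "same contents" equality and makes defensive copies, so the unpack/repack invariant still holds:)

```java
import java.util.Arrays;

public class RecordEqualsDemo {
    record Samples(int[] data) {
        Samples {
            data = data.clone();          // defensive copy on the way in
        }
        public int[] data() {
            return data.clone();          // defensive copy on the way out
        }
        // Because the accessor returns a *different* array object, the
        // default identity-based equals would break the invariant
        // r.equals(new Samples(r.data())); so equals/hashCode are
        // overridden to compare by contents instead.
        @Override public boolean equals(Object o) {
            return o instanceof Samples s && Arrays.equals(data, s.data);
        }
        @Override public int hashCode() {
            return Arrays.hashCode(data);
        }
    }

    public static void main(String[] args) {
        Samples r = new Samples(new int[]{1, 2, 3});
        System.out.println(r.equals(new Samples(r.data())));  // true
        r.data()[0] = 42;                 // mutates only a copy
        System.out.println(r.data()[0]); // 1
    }
}
```

Had the record instead wanted "same array object" semantics, it could keep all the defaults and write no code at all; either choice is valid, which is exactly why the language leaves it to the author.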
So if you define a record whose components are mutable, or for which you don't want the record equivalence semantics to be the same as the component equals, you're going to have to write some code -- but more importantly, you're going to have to tell your users what equality means for *your* record. Just like you're supposed to specify what equality means for every class. It might be tempting to say "records are just structural tuples, so there's nothing interesting to say about equality", but that turns out to be wishful thinking. Consistent with the choice we've made elsewhere (functional interfaces are nominal function types, not structural ones; sealed types are nominal union types, not structural ones), the rational choice for Java's product types is also nominal, not structural. -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From brian.goetz at oracle.com Wed Aug 10 18:12:11 2022
From: brian.goetz at oracle.com (Brian Goetz)
Date: Wed, 10 Aug 2022 14:12:11 -0400
Subject: Performance of Pattern Matching for switch (Third Preview)
In-Reply-To: References: Message-ID:

Yes, the current translation is deliberately unoptimized, and in fact, the most recent version has some translation issues that make it accidentally worse as well (such as some unnecessary boxing). These are currently being worked on.

On 8/10/2022 6:04 AM, Jordan Zimmerman wrote:
> Hi Folks,
>
> I've been experimenting with Pattern Matching for switch (Third
> Preview). I noticed that the performance of these enhanced switches is
> far worse than manual matching. Is this due to this only being a
> preview and optimizations have yet to be done? Anyway, I thought I'd
> mention what I found as an FYI.
>
> Here's the jmh benchmark I used:
> https://gist.github.com/Randgalt/a68ceee62cd8127431cbe6e7afbfdf44
>
> Here are the results:
>
> Benchmark                              Mode  Cnt      Score      Error  Units
> TestEnhancedSwitch.testEnhancedSwitch  thrpt    5  30789.482 ± 17667.365  ops/s
> TestEnhancedSwitch.testManualSwitch    thrpt    5  44651.612 ±  5135.641  ops/s
>
> Cheers.
>
> -Jordan

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jan.lahoda at oracle.com Thu Aug 11 12:26:05 2022
From: jan.lahoda at oracle.com (Jan Lahoda)
Date: Thu, 11 Aug 2022 14:26:05 +0200
Subject: Performance of Pattern Matching for switch (Third Preview)
In-Reply-To: References: Message-ID: <05031652-73d7-8b2c-1450-c31a05a59c15@oracle.com>

Hi Jordan,

Thanks for the report. Yes, the performance of various pattern matching switches is something that we'd like to improve, which is a task that will probably take a while. Currently, one PR relevant to your benchmark is: https://github.com/openjdk/jdk/pull/9779

Looking at the benchmark, I have a few comments/questions:

1.
I see the "Data" class generates the test List with a random length between 1000 and 2000, but as far as I can tell, different testcases will get a List of a different length. So the testcases are not really the same, as their input has a different length. Am I missing something here?

2. The actual content of the List is also random, and again the content is not the same for all the testcases, which I believe could skew the results (consider input data with a majority of Fruit.Apple, and a different set of data with a majority of Fruit.Pear - the task to solve is not the same). The effect of this is probably limited, though.

3. The test uses 4 threads, but when I run it with this setting, the error margins are very wide, making the results much less reliable (per my understanding). Which may be a consequence of the limited number (4 physical) of cores available on my laptop.

I've tweaked the test to use input data of length 1000 for all cases, and new Random(0) to generate the data.

The results for one thread (testEnhancedSwitch uses the code from PR 9779, testEnhancedSwitchLegacy uses the code currently in the mainline, testManualSwitch is the same as in your testcase):

TestEnhancedSwitch.testEnhancedSwitch        thrpt    5   95020.310 ±  689.833  ops/s
TestEnhancedSwitch.testEnhancedSwitchLegacy  thrpt    5   68175.714 ± 2245.512  ops/s
TestEnhancedSwitch.testManualSwitch          thrpt    5  102640.203 ± 2384.880  ops/s

And for two threads:

TestEnhancedSwitch.testEnhancedSwitch        thrpt    5  47714.842 ± 2206.843  ops/s
TestEnhancedSwitch.testEnhancedSwitchLegacy  thrpt    5  47080.128 ± 1679.960  ops/s
TestEnhancedSwitch.testManualSwitch          thrpt    5  41116.334 ± 4938.590  ops/s

(In multi-threaded mode, I wonder how much effect the use of ConcurrentHashMap has.)

Thanks,
    Jan

On 10. 08. 22 12:04, Jordan Zimmerman wrote:
> Hi Folks,
>
> I've been experimenting with Pattern Matching for switch (Third
> Preview).
> I noticed that the performance of these enhanced switches is
> far worse than manual matching. Is this due to this only being a
> preview and optimizations have yet to be done? Anyway, I thought I'd
> mention what I found as an FYI.
>
> Here's the jmh benchmark I used:
> https://gist.github.com/Randgalt/a68ceee62cd8127431cbe6e7afbfdf44
>
> Here are the results:
>
> Benchmark                              Mode  Cnt      Score      Error  Units
> TestEnhancedSwitch.testEnhancedSwitch  thrpt    5  30789.482 ± 17667.365  ops/s
> TestEnhancedSwitch.testManualSwitch    thrpt    5  44651.612 ±  5135.641  ops/s
>
> Cheers.
>
> -Jordan

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jordan at jordanzimmerman.com Thu Aug 11 13:09:49 2022
From: jordan at jordanzimmerman.com (Jordan Zimmerman)
Date: Thu, 11 Aug 2022 14:09:49 +0100
Subject: Performance of Pattern Matching for switch (Third Preview)
In-Reply-To: <05031652-73d7-8b2c-1450-c31a05a59c15@oracle.com>
References: <05031652-73d7-8b2c-1450-c31a05a59c15@oracle.com>
Message-ID: <50AB1F4A-E259-4A79-B028-4DA177713535@jordanzimmerman.com>

Hi Jan,

Thanks for the detailed reply. TBH I didn't spend much time on the test, so your comments are appropriate. I wrote the test after JFR reported SwitchBootstrap.typeSwitch as a hotspot in a project I'm working on. I think different tests getting different lengths doesn't really poison the tests, as both implementations have the same chances for list sizes and content.

> I wonder how much effect the use of ConcurrentHashMap has

I tried the test with both a simple HashMap and a ConcurrentHashMap, and the delta was similar as I recall.

PR 9779 looks promising. Anyway, as a Java user I would expect that the compiler can write better code than I can manually, FWIW.

Cheers.

-Jordan

> On Aug 11, 2022, at 1:26 PM, Jan Lahoda wrote:
>
> Hi Jordan,
>
> Thanks for the report.
> Yes, the performance of various pattern matching switches is something that we'd like to improve, which is a task that will probably take a while. Currently, one PR relevant to your benchmark is:
>
> https://github.com/openjdk/jdk/pull/9779
>
> Looking at the benchmark, I have a few comments/questions:
>
> 1. I see the "Data" class generates the test List with a random length between 1000 and 2000, but as far as I can tell, different testcases will get a List of a different length. So the testcases are not really the same, as their input has a different length. Am I missing something here?
>
> 2. The actual content of the List is also random, and again the content is not the same for all the testcases, which I believe could skew the results (consider input data with a majority of Fruit.Apple, and a different set of data with a majority of Fruit.Pear - the task to solve is not the same). The effect of this is probably limited, though.
>
> 3. The test uses 4 threads, but when I run it with this setting, the error margins are very wide, making the results much less reliable (per my understanding). Which may be a consequence of the limited number (4 physical) of cores available on my laptop.
>
> I've tweaked the test to use input data of length 1000 for all cases, and new Random(0) to generate the data.
>
> The results for one thread (testEnhancedSwitch uses the code from PR 9779, testEnhancedSwitchLegacy uses the code currently in the mainline, testManualSwitch is the same as in your testcase):
>
> TestEnhancedSwitch.testEnhancedSwitch        thrpt    5   95020.310 ±  689.833  ops/s
> TestEnhancedSwitch.testEnhancedSwitchLegacy  thrpt    5   68175.714 ± 2245.512  ops/s
> TestEnhancedSwitch.testManualSwitch          thrpt    5  102640.203 ± 2384.880  ops/s
>
> And for two threads:
>
> TestEnhancedSwitch.testEnhancedSwitch        thrpt    5  47714.842 ± 2206.843  ops/s
> TestEnhancedSwitch.testEnhancedSwitchLegacy  thrpt    5  47080.128 ±
1679.960  ops/s
> TestEnhancedSwitch.testManualSwitch          thrpt    5  41116.334 ± 4938.590  ops/s
>
> (In multi-threaded mode, I wonder how much effect the use of ConcurrentHashMap has.)
>
> Thanks,
>     Jan
>
> On 10. 08. 22 12:04, Jordan Zimmerman wrote:
>> Hi Folks,
>>
>> I've been experimenting with Pattern Matching for switch (Third Preview). I noticed that the performance of these enhanced switches is far worse than manual matching. Is this due to this only being a preview and optimizations have yet to be done? Anyway, I thought I'd mention what I found as an FYI.
>>
>> Here's the jmh benchmark I used:
>> https://gist.github.com/Randgalt/a68ceee62cd8127431cbe6e7afbfdf44
>>
>> Here are the results:
>>
>> Benchmark                              Mode  Cnt      Score      Error  Units
>> TestEnhancedSwitch.testEnhancedSwitch  thrpt    5  30789.482 ± 17667.365  ops/s
>> TestEnhancedSwitch.testManualSwitch    thrpt    5  44651.612 ±  5135.641  ops/s
>>
>> Cheers.
>>
>> -Jordan

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jan.lahoda at oracle.com Thu Aug 11 16:25:10 2022
From: jan.lahoda at oracle.com (Jan Lahoda)
Date: Thu, 11 Aug 2022 18:25:10 +0200
Subject: [External] : Re: Performance of Pattern Matching for switch (Third Preview)
In-Reply-To: <50AB1F4A-E259-4A79-B028-4DA177713535@jordanzimmerman.com>
References: <05031652-73d7-8b2c-1450-c31a05a59c15@oracle.com> <50AB1F4A-E259-4A79-B028-4DA177713535@jordanzimmerman.com>
Message-ID:

On 11. 08. 22 15:09, Jordan Zimmerman wrote:
> Hi Jan,
>
> Thanks for the detailed reply. TBH I didn't spend much time on the
> test, so your comments are appropriate. I wrote the test after JFR
> reported SwitchBootstrap.typeSwitch as a hotspot in a project I'm
> working on. I think different tests getting different lengths doesn't
> really poison the tests, as both implementations have the same chances
> for list sizes and content.

I think the length of the data has a fairly big effect.
Because each time the whole benchmark is executed, it will generate one set of data for testEnhancedSwitch and another set of data for testManualSwitch, and perform the measurement on this (now static) data. So the data is not re-generated many times to average out the random differences.

As a particular example (with '.thread(1)' + logging of the data size + improved PR 9779, but an otherwise unmodified benchmark), I ran the whole benchmark several times. Once I got:

testEnhancedSwitch - data size: 1117
testManualSwitch - data size: 1510

results:

TestEnhancedSwitch.testEnhancedSwitch  thrpt    5  85437.814 ± 7840.590  ops/s
TestEnhancedSwitch.testManualSwitch    thrpt    5  56473.669 ±  632.442  ops/s

And another time, I got:

testEnhancedSwitch - data size: 1988
testManualSwitch - data size: 1735

results:

TestEnhancedSwitch.testEnhancedSwitch  thrpt    5  43699.620 ± 6157.698  ops/s
TestEnhancedSwitch.testManualSwitch    thrpt    5  50338.482 ± 6817.907  ops/s

So, the (random) data size apparently has quite a significant impact on the results.

> > I wonder how much effect the use of ConcurrentHashMap has
>
> I tried the test with both a simple HashMap and ConcurrentHashMap and
> the delta was similar as I recall.

Looking at the image from JFR, I see that the test is spending significantly more time in ConcurrentHashMap.get than in doTypeSwitch. So while that should not affect the relative order, it probably has an effect on the precision of the benchmark.

Jan

> PR 9779 looks promising. Anyway, as a Java user I would expect that
> the compiler can write better code than I can manually FWIW.
>
> Cheers.
>
> -Jordan
>
>> On Aug 11, 2022, at 1:26 PM, Jan Lahoda wrote:
>>
>> Hi Jordan,
>>
>> Thanks for the report. Yes, the performance of various pattern
>> matching switches is something that we'd like to improve, which is a
>> task that will probably take a while.
>> Currently, one PR relevant to your benchmark is:
>>
>> https://github.com/openjdk/jdk/pull/9779
>>
>> Looking at the benchmark, I have a few comments/questions:
>>
>> 1. I see the "Data" class generates the test List with a random length
>> between 1000 and 2000, but as far as I can tell, different testcases will get
>> a List of a different length. So the testcases are not really the
>> same, as their input has a different length. Am I missing something here?
>>
>> 2. The actual content of the List is also random, and again the
>> content is not the same for all the testcases, which I believe could
>> skew the results (consider input data with a majority of
>> Fruit.Apple, and a different set of data with a majority
>> of Fruit.Pear - the task to solve is not the same). The effect
>> of this is probably limited, though.
>>
>> 3. The test uses 4 threads, but when I run it with this setting, the
>> error margins are very wide, making the results much less reliable
>> (per my understanding). Which may be a consequence of the limited
>> number (4 physical) of cores available on my laptop.
>>
>> I've tweaked the test to use input data of length 1000 for all cases,
>> and new Random(0) to generate the data.
>>
>> The results for one thread (testEnhancedSwitch uses the code from PR 9779,
>> testEnhancedSwitchLegacy uses the code currently in the mainline,
>> testManualSwitch is the same as in your testcase):
>>
>> TestEnhancedSwitch.testEnhancedSwitch        thrpt    5   95020.310 ±  689.833  ops/s
>> TestEnhancedSwitch.testEnhancedSwitchLegacy  thrpt    5   68175.714 ± 2245.512  ops/s
>> TestEnhancedSwitch.testManualSwitch          thrpt    5  102640.203 ± 2384.880  ops/s
>>
>> And for two threads:
>>
>> TestEnhancedSwitch.testEnhancedSwitch        thrpt    5  47714.842 ± 2206.843  ops/s
>> TestEnhancedSwitch.testEnhancedSwitchLegacy  thrpt    5  47080.128 ± 1679.960  ops/s
>> TestEnhancedSwitch.testManualSwitch          thrpt    5  41116.334 ± 4938.590
ops/s
>>
>> (In multi-threaded mode, I wonder how much effect the use of
>> ConcurrentHashMap has.)
>>
>> Thanks,
>>     Jan
>>
>> On 10. 08. 22 12:04, Jordan Zimmerman wrote:
>>> Hi Folks,
>>>
>>> I've been experimenting with Pattern Matching for switch (Third
>>> Preview). I noticed that the performance of these enhanced switches
>>> is far worse than manual matching. Is this due to this only being a
>>> preview and optimizations have yet to be done? Anyway, I thought I'd
>>> mention what I found as an FYI.
>>>
>>> Here's the jmh benchmark I used:
>>> https://gist.github.com/Randgalt/a68ceee62cd8127431cbe6e7afbfdf44
>>>
>>> Here are the results:
>>>
>>> Benchmark                              Mode  Cnt      Score      Error  Units
>>> TestEnhancedSwitch.testEnhancedSwitch  thrpt    5  30789.482 ± 17667.365  ops/s
>>> TestEnhancedSwitch.testManualSwitch    thrpt    5  44651.612 ±  5135.641  ops/s
>>>
>>> Cheers.
>>>
>>> -Jordan

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajcave at google.com Sat Aug 13 15:41:53 2022
From: ajcave at google.com (Andrew Cave)
Date: Sat, 13 Aug 2022 09:41:53 -0600
Subject: Getters vs. public final fields in records
In-Reply-To: <3b1e0165-4bf7-e20e-c722-45f3445b7f46@oracle.com>
References: <17510a1f-2206-4c63-786d-ab639b0786b4@oracle.com> <3b1e0165-4bf7-e20e-c722-45f3445b7f46@oracle.com>
Message-ID:

Thanks for explaining; I think there are a few issues with your reasoning, however:

As it turns out, the mathematical construct from Domain Theory known as
> "embedding-projection pairs" (which is a formalization of approximation)
> offers us a useful way to talk about the right thing.
> The key invariant
> that records must adhere to (specified in the refined contract of
> `Record::equals`) is that unpacking a record's components, and repacking in a
> new record, should yield an "equals" record:
>
> r.equals(new R(r.c0(), r.c1(), ..., r.cn()))
>
> (This is as close as we can say in Java to "construction and
> deconstruction form an embedding-projection pair between the space of
> records and the cartesian product space of their components.")

There are actually two conditions in the definition of an embedding-projection pair (I am using 'o' for function composition below):

1) p o e = id
2) e o p <= id

Where <= means, roughly speaking, "less defined than"; it is equality, but allowing the left-hand side to be partial, i.e. non-terminating on some inputs.

Following your "translation" into Java (and ignoring several technicalities that make it problematic), taking the constructor to be e and the destructors to be p,

1) means:

(new R(a1, ..., an)).p1().equals(a1)
... and ...
(new R(a1, ..., an)).pn().equals(an)

2) means (roughly)

new R(r.p1(), ..., r.pn()).equals(r)
*OR* new R(r.p1(), ..., r.pn()) fails to terminate

2) is close to the condition you claimed, but look at 1) -- it *forces* the accessors to return the inputs to the constructor *according to the existing equals() defined on the inputs* -- so, when equals() on the inputs is reference equality, the accessor is forced to return the exact same reference.

So, ok, this just means that the embedding-projection pair is not the right tool to make your case, that's ok. (Although I'm not sure where you got the impression it would -- it is a very specific definition, used almost exclusively in the construction of recursive types in domain theory.)

Instead, your condition is simply e o p = id -- the accessors form a right inverse of the constructor, or the constructor is a left inverse of the accessors, fine.
Note that if your constraint is the only one applied, it permits definitions of equality that satisfy properties like R(a,b).equals(R(b,a)) -- records can be made "unordered" and the accessors can roll a die to decide which component to return -- as well as more complex quotients, like R(a,b).equals(R(c,d)) exactly when a*d = b*c (the rational numbers). Maybe that's intentional? You tell me.

The reason that the situation you describe is inevitable comes from ...
>
> If your component already chooses the answer for equals that you want --
> such as ArrayList::equals comparing lists by contents, or arrays comparing
> by identity -- then you can do nothing. Otherwise, you have to override
> the constructor, accessor, and equals in concert to preserve the invariant
> that deconstruction and reconstruction yields something equivalent to the
> original.

There are already at least two ways to express this in Java: the first is to wrap the components in a class that defines the equality you want, and the second is to use a plain old class instead of a record. What is missing from Java is the ability to express plain old Cartesian products where the behaviour of the accessors and equality is *known* and well-understood by all. So I would hardly call the current situation "inevitable".

Consistent with the choice we've made elsewhere (functional interfaces are
> nominal function types, not structural ones; sealed types are nominal union
> types, not structural ones), the rational choice for Java's product types
> is also nominal, not structural.

I think there is a confusion here between structural *types* and structural *equality*. Nominal typing means simply that types are declared equal when they have the same name, while structural typing means types are declared equal if they have the same "structure" ("definition"). Records could force structural equality but remain nominally typed.
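Concretely, the rational-number quotient mentioned above is expressible as a legal record today (a sketch for illustration; the class and member names are mine, not from the thread):

```java
// A "rational" record whose equals() identifies (1, 2) with (2, 4):
// two rationals are equal exactly when num * other.den == den * other.num.
// r.equals(new Rational(r.num(), r.den())) still holds, yet the record
// is a quotient, not a plain structural tuple. (Assumes den != 0 and no
// overflow; hashCode normalizes so that equal values hash alike.)
record Rational(long num, long den) {
    @Override
    public boolean equals(Object o) {
        return o instanceof Rational r && num * r.den == den * r.num;
    }

    @Override
    public int hashCode() {
        if (num == 0) return 0;
        long g = gcd(Math.abs(num), Math.abs(den));
        long sign = (num < 0) == (den < 0) ? 1 : -1;
        return Long.hashCode(sign * (Math.abs(num) / g)) * 31
                + Long.hashCode(Math.abs(den) / g);
    }

    private static long gcd(long a, long b) {
        return b == 0 ? a : gcd(b, a % b);
    }
}
```

Here Rational(1, 2) and Rational(2, 4) are equal but their accessors report different components, which is exactly the kind of non-tuple behaviour the contract permits.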
Note also that Java has been a hybrid of structural and nominal typing ever since the introduction of generics -- we no longer "name" the type of "lists of integers" IntList -- instead we compare the types List<Integer> and List<Integer> by comparing the structure of the types -- that SomeComplexTypeExpression is the same as SomeOtherComplexTypeExpression, allowing type variables inside SomeComplexTypeExpression to be instantiated along the way. Pair is actually a kind of structural record type! So structural typing and nominal typing are not in conflict; they can and do work in harmony.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From brian.goetz at oracle.com Sat Aug 13 16:23:27 2022
From: brian.goetz at oracle.com (Brian Goetz)
Date: Sat, 13 Aug 2022 12:23:27 -0400
Subject: Getters vs. public final fields in records
In-Reply-To: References: <17510a1f-2206-4c63-786d-ab639b0786b4@oracle.com> <3b1e0165-4bf7-e20e-c722-45f3445b7f46@oracle.com>
Message-ID: <38dc129e-10d8-c3a9-4744-d759ca78b3ba@oracle.com>

> There are actually two conditions in the definition of an
> embedding-projection pair (I am using 'o' for function composition below):
>
> 1) p o e = id
> 2) e o p <= id
>
> Where <= means, roughly speaking, "less defined than"; it is equality,
> but allowing the left-hand side to be partial, i.e. non-terminating on
> some inputs

A good intuition for <= here is "approximates" or "has less information than." The bottom value (which is used to describe nonterminating/throwing operations) is a uniformly terrible (zero information) approximation for everything. But e-p pairs are not just about partiality; they are also about information loss. (The discrete partial ordering, which allows partiality but no other approximation, is a common but less interesting case.)
This model was chosen to also support _normalization_, such as defensive copies (a form of throwing away information, namely identity), rounding, truncating (e.g., float to int), clamping ("if (x > n) x = n"), replacing invalid components with valid ones (e.g., replacing (_, 0) with (0, 1) in a Rational class), etc.

> Following your "translation" into Java (and ignoring several
> technicalities that make it problematic), taking the constructor to
> be e and the destructors to be p,

Except you've got your mappings the wrong way around. The cartesian product space of the components is the "bigger" space; the constructor _projects_ from the unrestricted cartesian product space into the potentially-more-restricted record space, potentially losing information. The deconstructor (or vector of accessors) embeds back from the restricted record space to the unrestricted space, and does not lose information. (This is easy to get backwards; I often have to work it out from scratch when I get confused, and I have even mistyped "embed" for "project" once in this mail already.)

> 1) means:
>
> (new R(a1, ..., an)).p1().equals(a1)
> ... and ...
> (new R(a1, ..., an)).pn().equals(an)
>
> 2) means (roughly)
>
> new R(r.p1(), ..., r.pn()).equals(r)
> *OR* new R(r.p1(), ..., r.pn()) fails to terminate

Since you got your e/p backwards, you've got = / <= backwards too.

> 2) is close to the condition you claimed, but look at 1) -- it *forces*
> the accessors to return the inputs to the constructor *according to
> the existing equals() defined on the inputs* -- so, when equals() on
> the inputs is reference equality, the accessor is forced to return the
> exact same reference.

When you fix it, you get

    (new R(a1, ..., an)).p1() <= a1

which says you get a potentially normalized version of a1 out, as expected.
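The clamping case, for instance, fits in a one-line compact constructor (a hypothetical record name; just a sketch of the normalization idea):

```java
// Hypothetical record that normalizes on the way in: the value is
// clamped to [0, 100]. Construction may lose information (the
// projection), but deconstructing and reconstructing is an identity:
// new Percentage(p.value()).equals(p) holds for every p.
record Percentage(int value) {
    Percentage {
        value = Math.max(0, Math.min(100, value));
    }
}
```

Because normalization happens once, at construction, the stored value is always already in range, so the accessor and equals need no overrides at all.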
(I actually don't think you get as strong a component-by-component relation as the (1) and (2) relations you claim from the spec we have, but since I would have _liked_ to have gotten that, I'm not arguing with them.)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajcave at google.com Sat Aug 13 18:20:34 2022
From: ajcave at google.com (Andrew Cave)
Date: Sat, 13 Aug 2022 12:20:34 -0600
Subject: Getters vs. public final fields in records
In-Reply-To: <38dc129e-10d8-c3a9-4744-d759ca78b3ba@oracle.com>
References: <17510a1f-2206-4c63-786d-ab639b0786b4@oracle.com> <3b1e0165-4bf7-e20e-c722-45f3445b7f46@oracle.com> <38dc129e-10d8-c3a9-4744-d759ca78b3ba@oracle.com>
Message-ID:

> Except you've got your mappings the wrong way around.

Ok, let's go with that:

(new R(a)).p() <= a

It's difficult to be precise, because you haven't spelled out the entire semantics (and I'm pretty sure it doesn't exist), but if we are able to do some kind of "normalization", we are presumably able to do the simplest imaginable kind:

record R(Boolean b) {
  public Boolean b() { return true; }
  public boolean equals(Object o) { return true; }
}

And also:

record S(Boolean b) {
  public Boolean b() { return false; }
  public boolean equals(Object o) { return true; }
}

from which we can deduce:

false = S(true).b() <= true

And

true = R(false).b() <= false

So true <= false and false <= true, hence true = false, and since you interpret "=" as .equals, we are forced to make .equals() on Boolean say that true is equal to false.

The key here is that <= in "R(a).p() <= a" is the one defined on the domain (type) of *a*, and you don't get to choose what it is as part of the definition of your record; it's already chosen for you!

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From brian.goetz at oracle.com Sat Aug 13 19:25:31 2022
From: brian.goetz at oracle.com (Brian Goetz)
Date: Sat, 13 Aug 2022 15:25:31 -0400
Subject: Getters vs.
public final fields in records
In-Reply-To: References: <17510a1f-2206-4c63-786d-ab639b0786b4@oracle.com> <3b1e0165-4bf7-e20e-c722-45f3445b7f46@oracle.com> <38dc129e-10d8-c3a9-4744-d759ca78b3ba@oracle.com>
Message-ID: <4ba1b9ab-f799-25f7-f187-856f6abd13f8@oracle.com>

The definition of the approximation ordering <= is part of the e-p pair, along with the two sets being related and the two mappings. You've chosen a different <= for each of R and S; in R-land, you say that true approximates everything; in S-land, you say that false approximates everything. That's why you got the seeming inconsistency.

In any case, we're getting pretty far afield and I'm still not getting any closer to seeing the point you're trying to make, other than "I don't like how you designed records, and I think my idea is better, and I want to argue about it." Which is (a) not the charter of this list, (b) arguing about decisions that are already made, and (c) does not seem to be offering any new information about why your way is better that wasn't already in evidence when the feature was being designed. So I think we're kind of off the road here.

On 8/13/2022 2:20 PM, Andrew Cave wrote:
>> Except you've got your mappings the wrong way around.
>
> Ok, let's go with that:
>
> (new R(a)).p() <= a
>
> It's difficult to be precise, because you haven't spelled out the
> entire semantics (and I'm pretty sure it doesn't exist), but if we
> are able to do some kind of "normalization", we are presumably able to
> do the simplest imaginable kind:
>
> record R(Boolean b) {
>   public Boolean b() { return true; }
>   public boolean equals(Object o) { return true; }
> }
>
> And also:
>
> record S(Boolean b) {
>   public Boolean b() { return false; }
>   public boolean equals(Object o) { return true; }
> }
>
> from which we can deduce:
>
> false = S(true).b() <= true
>
> And
>
> true = R(false).b() <= false
>
> So true <= false and false <= true, hence true = false, and since you
> interpret "="
as .equals, we are forced to make .equals() on Boolean
> say that true is equal to false.
>
> The key here is that <= in "R(a).p() <= a" is the one defined on the
> domain (type) of *a*, and you don't get to choose what it is as part of
> the definition of your record; it's already chosen for you!

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajcave at google.com Sat Aug 13 20:46:00 2022
From: ajcave at google.com (Andrew Cave)
Date: Sat, 13 Aug 2022 14:46:00 -0600
Subject: Getters vs. public final fields in records
In-Reply-To: <4ba1b9ab-f799-25f7-f187-856f6abd13f8@oracle.com>
References: <17510a1f-2206-4c63-786d-ab639b0786b4@oracle.com> <3b1e0165-4bf7-e20e-c722-45f3445b7f46@oracle.com> <38dc129e-10d8-c3a9-4744-d759ca78b3ba@oracle.com> <4ba1b9ab-f799-25f7-f187-856f6abd13f8@oracle.com>
Message-ID:

> The definition of the approximation ordering <= is part of the e-p pair,
> along with the two sets being related and the two mappings.

This is a very non-standard definition of an e-p pair; are you able to show any other piece of literature that agrees with your definition? Traditionally, the definition of <= comes with the domain, which is often the interpretation of a syntactic type (Boolean, in my example). I conjecture your definition becomes essentially useless when you consider any program larger than (new R(a,b)).p1().

In any case, we're getting pretty far afield and I'm still not getting any
> closer to seeing the point you're trying to make, other than "I don't like
> how you designed records, and I think my idea is better, and I want to
> argue about it." Which is (a) not the charter of this list, (b) arguing
> about decisions that are already made, and (c) does not seem to be offering
> any new information about why your way is better that wasn't already in
> evidence when the feature was being designed. So I think we're kind of off
> the road here.
Ok, there are a few points I want to make:

1) You brought up e-p pairs as a justification for the design of records. I have tried to explain that this justification is flawed. Your definition is also, and I am being as generous as possible, very non-standard. I would caution against using your non-standard definition to justify the design of any new features, and instead urge you to consult with programming language theory experts.

2) The Java definition of record is, at best, very unusual when compared to most outside work. It fails as a tool to represent "a simple aggregation of data", which supposedly was the goal.

3) To be specific about what I think the problem is: Java still has no effective/ergonomic way to express product types. Specifically, the laws that product types are expected to obey *do not* hold of records. And these laws are not just academic concerns; they affect real users' ability to grok code. Conversely, the extra "flexibility" that Java records come with (overriding accessors, equals, and hashCode) can be achieved perfectly well with ordinary classes. The only gain is more succinct constructors, and we can discuss the possibility of short-form constructors independently of records. There is real value in knowing with certainty that equals() and "accessors" are *standard*, not overridden, just like there is real value in knowing that fields are final.

4) It's not too late. Unfortunately, Java has now reserved the keyword "record" for something bizarre and there is no going back on that, but supposedly the purpose of the Amber project is to incubate new features, and nothing prevents us from doing it again under a new name (or a refinement of the existing concept) except, I speculate, a reluctance to make progress "too quickly" (or rather, catching up to 1970s programming languages & theory "too quickly") and a reluctance to admit mistakes so soon after the release of records.
5) Some things are not just matters of opinion from a single disgruntled user. E.g., the covariance of arrays was a mistake. Consider the possibility that records are similarly flawed in a way that is not yet well known to the Java community.

6) Java is probably not going away any time soon, and these choices are going to impact generations of users. Please be responsible stewards, be open to listening to and acting on criticism, particularly from experts outside your immediate community, and forget about the concept of "too late". Perhaps it's a matter of either designing proper product types now, or 15 years from now when the issue is finally well understood by the Java community, which would truly be "too late".

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ron.pressler at oracle.com Mon Aug 15 10:48:41 2022
From: ron.pressler at oracle.com (Ron Pressler)
Date: Mon, 15 Aug 2022 10:48:41 +0000
Subject: Getters vs. public final fields in records
Message-ID: <980A6A2F-9AC8-4477-934A-E9D284EE1D6C@oracle.com>

If I understand correctly, you have two complaints:

1. The requirements from record classes that client code is able to rely on are not the ones you'd like them to be. In fact, as Brian hinted, the actual requirements are weaker than the ones you discussed. They're just the ones needed to allow the reasonable use of records in collections (such as sets or map keys), pattern matching, and serialization.

2. Some assumptions on records -- whatever they are -- are not strongly enforced in a way that the client code can "know for certain" that they hold.

As for 1, Brian explained why, for reasons involving mutability in general and arrays in particular, we chose not to assume/enforce "tupleness". While we like treating records as tuples, they are not strictly formally so, but we believe the assumptions we chose to make are the right ones for a Java data carrier.
If the need for stronger tuples is great, and if records are "flawed in a way that is not yet well-known to the Java community", then a new feature could hypothetically be added when the flaw *and its severity* are well known. At this point in time, the bar for adding another feature is a clear demonstration that it's required to solve a *serious/frequent* problem encountered *in the field*. I don't think anyone has sufficient experience to clear that bar right now. And here's the important point: while the properties of records are objectively as you said, and might even be characterised as objective flaws, your alternative suffers from other objective flaws. You're suggesting that we prefer your objectively flawed design to our objectively flawed design; that's no longer objective, and we disagree with your prioritisation. BTW, it is also not objectively true that every error -- accepting your premise -- is always worth correcting. When domain experts are in disagreement, we listen to all of them but, objectively, must reject the opinion of some. You have been respectfully listened to and engaged with; your suggestion was considered and, for now, rejected. Had it been otherwise, we'd be listening to and rejecting other domain experts who wish the decision went the other way. The suggestion that we're not being responsible stewards of the platform, or not listening, because after consideration we must rule in favour of one domain expert over another is disrespectful of the work we must do. As for 2, I will try not to get carried away because the subject has been of great interest to me for years, both in the field of programming languages and the field of formal methods. The general question is in which situations the cost/benefit of soundness is justified compared to unsound approaches. The formal methods community in particular has seen a shift in favour of unsoundness in recent decades. We try to find a balance in every particular case.
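To make the trade-off concrete: records do carry one strong runtime guarantee -- every instance passes through the canonical constructor -- so an invariant checked there always holds. A minimal sketch (my example, not from the thread):

```java
public class Main {
    // An invariant checked in the compact canonical constructor holds for
    // every instance, because all construction paths funnel through it.
    record Range(int lo, int hi) {
        Range {
            if (lo > hi) throw new IllegalArgumentException("lo > hi");
        }
    }

    public static void main(String[] args) {
        System.out.println(new Range(1, 5)); // prints: Range[lo=1, hi=5]
        try {
            new Range(5, 1); // never produces an instance
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```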
More specifically, when it comes to Java, static guarantees are not always what they seem. For example, in your motivating example, it is not true that `if (r.foo != null) r.foo.bar();` is sufficient to "know for certain" that r.foo will not be null without other checks or a special configuration. The reason is that due to separate compilation and binary compatibility, the class of r at runtime might not be the same as its class at compilation time (even if your alternative Record class did not allow its subclasses to be defined with non-final fields, binary compatibility would still require you to check, at runtime, that r is a subclass of Record). So really we're comparing the cost/benefit of different levels of unsoundness. The strong guarantees we provide in Java -- needed for, say, security -- often take a more dynamic form. Indeed, there is a stronger guarantee for records made in OpenJDK, and that is that an instance of a record class cannot be instantiated without calling its canonical constructor. Anyway, given that, on balance, the non-strict-tupleness of records was deemed insufficient to merit action at this time, I'd be happy to continue the discussion off-list, but there's little point arguing further on amber-dev. -- Ron From brian.goetz at oracle.com Wed Aug 17 15:14:37 2022 From: brian.goetz at oracle.com (Brian Goetz) Date: Wed, 17 Aug 2022 11:14:37 -0400 Subject: Fwd: Bytecode transforming record class to be mutable In-Reply-To: <76e3801c-f049-dba7-c34c-b5f1f952abf6@gmail.com> References: <76e3801c-f049-dba7-c34c-b5f1f952abf6@gmail.com> Message-ID: <120f3d61-336d-fca6-ce6b-1aa04d906592@oracle.com> This was received on the -comments list. This is definitely an abuse, which may have been done out of ignorance (transform all the classes, without looking very carefully) or out of cleverness-toxicity (many people's judgment gets turned off when they think they're being clever.)
But generating "mutable records" is a serious party foul, and we should treat it the way normal communities treat party fouls -- with shame (and if that doesn't work, banishment.) The JVM has some awareness of record-ness (e.g., the Record attribute, primarily used to support reflection), but like with so many features, the JVM can't enforce every requirement that the language enforces (and often shouldn't.) Most ORMs have figured out how to work with immutable carriers. The EBean community should be encouraged to do the same, or to not try to work with records. These attempts to "rewrite rules you don't like" may offer the author a brief frisson of perceived "sticking it to the man", but ultimately just pollute the community, to everyone's detriment. -------- Forwarded Message -------- Subject: Bytecode transforming record class to be mutable Date: Wed, 17 Aug 2022 16:50:09 +0200 From: Christian Beikov To: amber-spec-comments at openjdk.org I just saw that EBean does bytecode transformation of record class files in a way that feels odd to me and I seek an answer about whether this is legal from a JVM point of view. Apparently, it is possible to have a class file where the class extends `java.lang.Record` and defines record component attributes (so it's a "record" like javac would create it), but with the following additional "features" which javac would not allow: * Make fields for record components non-final * Add additional fields that are not set through the canonical constructor, nor exposed through record component attributes To me, this seems illegal and I would have expected a JVM verification error. I would like to know if this is something that is "supported", which I can build upon, or if the lack of verification is a JVM bug. Are records just a Java language feature without JVM support?!
I read that final fields of records are "truly final" and can't be changed even through reflection and assumed there must be special JVM support that makes sure records match the Java language semantics... Cross posting from StackOverflow: https://stackoverflow.com/questions/73377190/bytecode-transforming-record-class-to-be-mutable Regards, Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From forax at univ-mlv.fr Wed Aug 17 17:24:52 2022 From: forax at univ-mlv.fr (Remi Forax) Date: Wed, 17 Aug 2022 19:24:52 +0200 (CEST) Subject: Bytecode transforming record class to be mutable In-Reply-To: <120f3d61-336d-fca6-ce6b-1aa04d906592@oracle.com> References: <76e3801c-f049-dba7-c34c-b5f1f952abf6@gmail.com> <120f3d61-336d-fca6-ce6b-1aa04d906592@oracle.com> Message-ID: <873727994.22342398.1660757092653.JavaMail.zimbra@u-pem.fr> > From: "Brian Goetz" > To: "amber-dev" > Sent: Wednesday, August 17, 2022 5:14:37 PM > Subject: Fwd: Bytecode transforming record class to be mutable > This was received on the -comments list. > This is definitely an abuse, which may have been done out of ignorance > (transform all the classes, without looking very carefully) or out of > cleverness-toxicity (many people's judgment gets turned off when they think > they're being clever.) But generating "mutable records" is a serious party > foul, and we should treat it the way normal communities treat party fouls -- > with shame (and if that doesn't work, banishment.) > The JVM has some awareness of record-ness (e.g., the Record attribute, primarily > used to support reflection), but like with so many features, the JVM can't > enforce every requirement that the language enforces (and often shouldn't.) > Most ORMs have figured out how to work with immutable carriers. The EBean > community should be encouraged to do the same, or to not try to work with > records. 
These attempts to "rewrite rules you don't like" may offer the author > a brief frisson of perceived "sticking it to the man", but ultimately just > pollute the community, to everyone's detriment. Yes, chapter 4.7 of the VM spec says that the Record attribute is "not critical to correct interpretation of the class file by the Java Virtual Machine, but are either critical to correct interpretation of the class file by the class libraries of the Java SE Platform, or are useful for tools". https://docs.oracle.com/javase/specs/jvms/se18/html/jvms-4.html#jvms-4.7 The VM should not reject such a classfile, but libraries consuming such a classfile (including reflection) may not work properly (this also disables the constantification of record fields by the JITs). I understand the appeal of creating chimeras like this; it's quite fun to learn how things work, but providing a library for others to create such beasts is quite sad. Rémi > -------- Forwarded Message -------- > Subject: Bytecode transforming record class to be mutable > Date: Wed, 17 Aug 2022 16:50:09 +0200 > From: Christian Beikov > To: amber-spec-comments at openjdk.org > I just saw that EBean does bytecode transformation of record class files in a > way that feels odd to me and I seek an answer about whether this is legal from > a JVM point of view.
> Apparently, it is possible to have a class file, where the class extends > `java.lang.Record` and defines record component attributes (so it's a "record" > like javac would create it), but with the following additional "features" which > javac would not allow: > * Make fields for record components non-final > * Add additional fields that are not set through the canonical constructor, nor > exposed through record component attributes > To me, this seems illegal and I would have expected a JVM verification error. I > would like to know if this is something that is "supported", which I can build > upon, or if the lack of verification is a JVM bug. Are records just a Java > language feature without JVM support?! I read that final fields of records are > "truly final" and can't be changed even through reflection and assumed there > must be special JVM support that makes sure records match the Java language > semantics... > Cross posting from StackOverflow: > https://stackoverflow.com/questions/73377190/bytecode-transforming-record-class-to-be-mutable > Regards, > Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Wed Aug 17 17:30:53 2022 From: brian.goetz at oracle.com (Brian Goetz) Date: Wed, 17 Aug 2022 13:30:53 -0400 Subject: Bytecode transforming record class to be mutable In-Reply-To: <873727994.22342398.1660757092653.JavaMail.zimbra@u-pem.fr> References: <76e3801c-f049-dba7-c34c-b5f1f952abf6@gmail.com> <120f3d61-336d-fca6-ce6b-1aa04d906592@oracle.com> <873727994.22342398.1660757092653.JavaMail.zimbra@u-pem.fr> Message-ID: <949a6a10-9611-3beb-4b04-10b313d4e064@oracle.com> Yes, it's like genetic manipulation in science-fiction movies. Creating half-dog, half-pig beasts might be cool, and might even be useful for something, but releasing them into the wild is unlikely to turn out well.
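As a concrete illustration of what the "class libraries of the Java SE Platform" consume from the Record attribute: core reflection exposes record-ness and record components. This sketch (mine, not from the thread) shows the APIs that a bytecode-transformed "mutable record" risks breaking, even though the verifier accepts the class file.

```java
import java.lang.reflect.RecordComponent;

public class Main {
    record Point(int x, int y) {}

    public static void main(String[] args) {
        Class<?> c = Point.class;
        // isRecord() is driven by the class extending java.lang.Record
        // and carrying the Record attribute.
        System.out.println(c.isRecord()); // prints: true
        // getRecordComponents() reads the component list from that attribute;
        // serialization and reflective pattern-matching support build on it.
        for (RecordComponent rc : c.getRecordComponents()) {
            System.out.println(rc.getName() + " : " + rc.getType().getSimpleName());
        }
    }
}
```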
> I understand the appeal of creating chimeras like this; it's quite fun > to learn how things work, but providing a library for others to > create such beasts is quite sad. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sankar.singu at gmail.com Tue Aug 23 18:28:44 2022 From: sankar.singu at gmail.com (sankar singh) Date: Tue, 23 Aug 2022 23:58:44 +0530 Subject: Java Single Conditional Operator Message-ID: Hi Team, We are using ternary operator Can we use single conditional code like the below. *if (a>50)* * print("50 more")* *a>50?. print("50 more")* -- regards, Shankar.S -------------- next part -------------- An HTML attachment was scrubbed... URL: From talden at gmail.com Tue Aug 23 19:49:09 2022 From: talden at gmail.com (Aaron Scott-Boddendijk) Date: Wed, 24 Aug 2022 07:49:09 +1200 Subject: Java Single Conditional Operator In-Reply-To: References: Message-ID: Using a more idiomatic use of whitespace, isn't this just the difference between: | if (a > 50) print("50 more"); and | a > 50 ?. print("50 more"); That is, a two character difference for a new operator we can't use for something else (and that operator is highly recognisable as the 'null-safe navigation' operator in several languages - and proposed for Java way back in Java 6/7 days with Project Coin - or was it even earlier than that). If teams want to shorten code in this form, can't they just change formatting policies to allow a single-branch, single-statement body, if-statement to occupy a single line? -- Aaron Scott-Boddendijk On Wed, Aug 24, 2022 at 6:29 AM sankar singh wrote: > Hi Team, > > We are using ternary operator > > Can we use single conditional code like the below. > > *if (a>50)* > * print("50 more")* > > *a>50?. print("50 more")* > > > -- > > regards, > > Shankar.S > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brian.goetz at oracle.com Tue Aug 23 20:31:53 2022 From: brian.goetz at oracle.com (Brian Goetz) Date: Tue, 23 Aug 2022 20:31:53 +0000 Subject: Java Single Conditional Operator In-Reply-To: References: Message-ID: Let's go back to why we have the ternary expression in the first place. Contrary to popular belief, it is *not* for syntactic concision. The difference between int x; if (cond) x = a; else x = b; and int x = cond ? a : b; is that the latter is a _more constrained_ construct than the former. In the former, the then/else blocks of the if can contain arbitrary statements, and there is no way (other than DA/DU analysis) to capture the intention that we will assign to x in each arm (or even that there are both arms.) Whereas the latter is an _expression_, and expressions are _total_. So the latter makes use of a more constrained mechanism, and therefore allows for richer type-checking. The concision is merely a bonus. Your proposal conflates statements and expressions; the ternary conditional is an expression, whose arms are expressions, but you want to use a version of it for statements. And why? So you can type *two fewer characters*. It offers no additional type checking, introduces a gratuitously different way to do the same thing, and creates the Frankenstein monster of an operator that is really a statement. And it doesn't result in more readable code; arguably, less readable, since we're less used to spotting side-effects nestled in what look like expressions. If you mean "if (condition) do stuff", then there's no shame in saying exactly that. On Aug 23, 2022, at 2:28 PM, sankar singh > wrote: Hi Team, We are using ternary operator Can we use single conditional code like the below. if (a>50) print("50 more") a>50?. print("50 more") -- regards, Shankar.S -------------- next part -------------- An HTML attachment was scrubbed...
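The contrast between the statement form and the expression form can be sketched side by side (my example, not from the mail): the statement form relies on definite-assignment analysis to prove x is assigned on every path, while the expression form is total by construction.

```java
public class Main {
    static int pick(boolean cond, int a, int b) {
        // Statement form: the compiler must run DA/DU analysis to verify
        // that x is definitely assigned on every path before use.
        int x;
        if (cond) x = a; else x = b;

        // Expression form: totality is structural -- both arms must exist
        // and both must produce a value of the target type.
        int y = cond ? a : b;

        return x + y;
    }

    public static void main(String[] args) {
        System.out.println(pick(true, 1, 2));  // prints: 2
        System.out.println(pick(false, 1, 2)); // prints: 4
    }
}
```

Dropping the else branch from the statement form would merely move the error to the use of x ("variable x might not have been initialized"); dropping an arm of the ternary is a syntax error outright.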
URL: From ice1000kotlin at foxmail.com Wed Aug 24 19:26:43 2022 From: ice1000kotlin at foxmail.com (=?utf-8?B?VGVzbGEgWmhhbmc=?=) Date: Wed, 24 Aug 2022 15:26:43 -0400 Subject: Block expressions with `yield` for its value? Message-ID: Hi all, Since switch expressions have block bodies with `yield` specifying its value, is it a good idea to generalize this to all blocks, and allow blocks as expressions? Regards, Tesla -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Thu Aug 25 21:04:02 2022 From: brian.goetz at oracle.com (Brian Goetz) Date: Thu, 25 Aug 2022 17:04:02 -0400 Subject: Block expressions with `yield` for its value? In-Reply-To: References: Message-ID: This came up when we did switch expressions. Brief answer: no. Blocks already have a pretty strong association in Java -- something that holds statements. Java developers would likely find it a jarring gear-change to start seeing these as expressions. Additionally, it creates new points of confusion, such as interaction of yield with break/continue, or nested block expressions (next request would be: can I yield to an uplevel expression?) This is as far as we're comfortable going right now. On 8/24/2022 3:26 PM, Tesla Zhang wrote: > Hi all, > > > Since switch expressions have block bodies with `yield` specifying its > value, is it a good idea to generalize this to all blocks, and allow > blocks as expressions? > > ------------------------------------------------------------------------ > Regards, > Tesla -------------- next part -------------- An HTML attachment was scrubbed... URL:
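For readers following along, this is the existing construct under discussion: a `yield`ing block is allowed only as the arm of a switch *expression*, and the declined request was to allow such blocks anywhere an expression may appear. A small sketch (mine):

```java
public class Main {
    // A block body inside a switch expression; `yield` supplies the
    // value of that arm. This is as far as the language goes -- a bare
    // block elsewhere is still a statement, never an expression.
    static String describe(int n) {
        return switch (Integer.signum(n)) {
            case -1 -> "negative";
            case 0  -> "zero";
            default -> {
                String s = "positive";    // statements are allowed here
                yield s + " (" + n + ")"; // yield gives the arm's value
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(-3)); // prints: negative
        System.out.println(describe(7));  // prints: positive (7)
    }
}
```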