From davidalayachew at gmail.com Fri Apr 5 01:19:47 2024
From: davidalayachew at gmail.com (David Alayachew)
Date: Thu, 4 Apr 2024 21:19:47 -0400
Subject: Some thoughts about the recent discussion on Member Patterns.
Message-ID:

Hello Amber Dev Team,

I wanted to chime into the recent discussion about Member Patterns, but on a side topic. I decided to make this a separate thread to avoid distracting from the main discussion.

In that discussion, I saw Brian Goetz make the following claim.

> ## Exhaustiveness
>
> There is one last syntax question in front of us: how to
> indicate that a set of patterns are (claimed to be)
> exhaustive on a given match candidate type. We see this
> with `Optional::of` and `Optional::empty`; it would be
> sad if the compiler did not realize that these two
> patterns together were exhaustive on `Optional`. This is
> not a feature that will be used often, but not having it
> at all will be a repeated irritant.
>
> The best I've come up with is to call these `case`
> patterns, where a set of `case` patterns for a given
> match candidate type in a given class are asserted to be
> an exhaustive set:
>
> ```
> class Optional<T> {
>     static <T> Optional<T> of(T t) { ... }
>     static <T> Optional<T> empty() { ... }
>
>     static <T> case pattern of(T t) for Optional<T> { ... }
>     static <T> case pattern empty() for Optional<T> { ... }
> }
> ```
>
> Because they may not be truly exhaustive, `switch`
> constructs will have to back up the static assumption of
> exhaustiveness with a dynamic check, as we do for other
> sets of exhaustive patterns that may have remainder.
>
> I've experimented with variants of `sealed` but it felt
> more forced, so this is the best I've come up with.

Later on, I saw Clement Charlin make the following response.

> # Exhaustiveness
>
> The `case` modifier is fine, but the design should leave
> room for `case LABEL` or `case (LABEL1, LABEL2)` to
> delineate membership in exhaustive set(s), as a potential
> future enhancement.

To be explicit, I am assuming that we will eventually be able to exhaustively deconstruct Optional using something like the following.

switch (someOptional)
{

    case null               -> System.out.println("The Optional itself is null?!");
    case Optional.of(var a) -> System.out.println("Here is " + a);
    case Optional.empty()   -> System.out.println("There's nothing here");

    //no default clause needed because this is exhaustive

}

Once pattern-matching lands for normal classes, Optional is almost guaranteed to be the class most frequently deconstructed/pattern-matched. And since it does not use sealed types, it will really push a lot of people to model exhaustiveness as a set of methods.

It's kind of frustrating.

One article that captures my frustration well is from Alexis King -- "Parse, don't validate" [1].

In it, she talks about the value of parsing data into a container object, with the intent of capturing and RETAINING validation via the type name.

String validEmailAddress vs record ValidEmailAddress(String email) {/** Validation logic in the canonical constructor. */}

The moment that the String validEmailAddress leaves the local scope where the validation occurred, its validation is no longer known except through tracing. But having a full-blown type allows you to assert that the validation has already been done, with no possible chance for misuse or mistakes.
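To make that concrete, the record half of that comparison could be as small as the following sketch (the actual check is only an illustration -- any validation in the canonical constructor works).

record ValidEmailAddress(String email)
{

    ValidEmailAddress
    {
        //Illustrative check only -- stands in for whatever real validation you need.
        if (email == null || !email.contains("@"))
        {
            throw new IllegalArgumentException("Not a valid email address -- " + email);
        }
    }

}

Once a ValidEmailAddress has been constructed, every later scope can rely on the check having already run, which is exactly the guarantee the plain String version loses.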
I guess my question is, in what instances would we say that modeling a set of patterns rather than a set of types would be better? The only argument that I can think of is conciseness. Or maybe we don't want to poison our type hierarchy with an edge case scenario. That point specifically seems to be the logic that Optional is following.

My hesitation comes from the fact that pattern sets feel a little leaky. And leaky gives me distress when talking about exhaustiveness checking.

With sealed types, if I want to implement SomeSealedInterface, I **MUST** acknowledge the question of exhaustiveness. There's no way for me to avoid it. My implementing type MUST be sealed, final, or non-sealed. And even if I implement/extend one of the implementing types of SomeSealedInterface, they either propagate the question, or they opt out of exhaustiveness checking. Bulletproof!

But adding a pattern to a class does not carry the same guarantee. If I add a new pattern that SHOULD have been part of the exhaustive set, but isn't, I have introduced a bug. This same bug is NOT POSSIBLE with sealed types. Hence, leaky.

I guess my thoughts could be summed up as the following -- I feel like we are making an escape hatch for Optional that I don't think would be worth the weight if there was any other way for Optional to be exhaustive. And if that is truly the case, does that REALLY justify doing this? Feels tacked onto the side and leaky imo.

And I will close by saying, I actually used to think this was a good idea. I have said so on multiple occasions on this exact mailing list. But the more that I think about it, the more that I see no real good reason to do this other than "Optional needs it".

* Conciseness? Not a strong argument. Conciseness should be used to communicate a semantic difference, not just to shorten code. Think if statements vs ternary expressions.

* Semantic difference? Barely, and not in a way that isn't otherwise possible. It's just when clauses with exhaustiveness attached tbh. You're better off modeling it explicitly. Again, Parse, don't validate.

Thank you all for your time and help!
David Alayachew

[1] = https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brian.goetz at oracle.com Fri Apr 5 12:58:45 2024
From: brian.goetz at oracle.com (Brian Goetz)
Date: Fri, 5 Apr 2024 08:58:45 -0400
Subject: Some thoughts about the recent discussion on Member Patterns.
In-Reply-To:
References:
Message-ID: <3cce119b-36c4-465e-ad77-51cf725c9bfd@oracle.com>

The question of "why don't you just turn Optional into an algebraic data type, and be done with it" is valid, and has been asked before (though, usually it is not asked so constructively.) In many ways this is the "obvious" answer.

So, why have we doggedly refused to do the "obvious" thing? Because object modeling is not the only consideration for important platform classes like Optional.

In particular, Amber is not the only force that is moving the platform forward; there is also Valhalla. And we would very much like Optional to be a value type, to gain all the benefits that can confer. But the "why don't you just model it as the sum of None|Some(t)" approach is incompatible with that.

So the reason we've "ignored the obvious" (and been willing to pay extra costs elsewhere) is that we are trying to balance both the object model and the runtime costs, so that people can "just use Optional" and get the best of both worlds.

(This game is harder than it looks!)

On 4/4/2024 9:19 PM, David Alayachew wrote:
> Hello Amber Dev Team,
>
> I wanted to chime into the recent discussion about Member Patterns,
> but on a side topic.
I decided to make this a separate thread to avoid > distracting from the main discussion. > > In that discussion, I saw Brian Goetz make the following claim. > > > ## Exhaustiveness > > > > There is one last syntax question in front of us: how to > > indicate that a set of patterns are (claimed to be) > > exhaustive on a given match candidate type.? We see this > > with `Optional::of` and `Optional::empty`; it would be > > sad if the compiler did not realize that these two > > patterns together were exhaustive on `Optional`. This is > > not a feature that will be used often, but not having it > > at all will be repeated irritant. > > > > The best I've come up with is to call these `case` > > patterns, where a set of `case` patterns for a given > > match candidate type in a given class are asserted to be > > an exhaustive set: > > > > ``` > > class Optional { > > ? ? static Optional of(T t) { ... } > > ? ? static Optional empty() { ... } > > > > ? ? static case pattern of(T t) for Optional { ... } > > ? ? static case pattern empty() for Optional { ... } > > } > > ``` > > > > Because they may not be truly exhaustive, `switch` > > constructs will have to back up the static assumption of > > exhaustiveness with a dynamic check, as we do for other > > sets of exhaustive patterns that may have remainder. > > > > I've experimented with variants of `sealed` but it felt > > more forced, so this is the best I've come up with. > > Later on, I saw Clement Charlin make the following response. > > > # Exhaustiveness > > > > The `case` modifier is fine, but the design should leave > > room for `case LABEL` or `case (LABEL1, LABEL2)` to > > delineate membership in exhaustive set(s), as a potential > > future enhancement. > > To be explicit, I am assuming that we will eventually be able to > exhaustively deconstruct Optional using something like the following. > > switch (someOptional) > { > > ? ? case null ? ? ? ? ? ? ? -> System.out.println("The Optional itself > is null?!"); > ? ? case Optional.of(var a) -> System.out.println("Here is " + a); > ? ? case Optional.empty() ? -> System.out.println("There's nothing here"); > > ? ? //no default clause needed because this is exhaustive > > } > > Once pattern-matching lands for normal classes, Optional is almost > guaranteed to be the class most frequently > deconstructed/pattern-matched. And since it does not use sealed types, > it will really push a lot of people to model exhaustiveness as a set > of methods. > > It's kind of frustrating. > > One article that captures my frustration well is from Alexis King -- > "Parse, don't Validate" [1]. > > In it, she talks about the value of parsing data into a container > object, with the intent of capturing and RETAINING validation via the > type name. > > String validEmailAddress vs record ValidEmailAddress(String email) > {/** Validation logic in the canonical constructor. */} > > The moment that the String validEmailAddress leaves the local scope > where the validation occurred, its validation is no longer known > except through tracing. But having a full-blown type allows you to > assert that the validation has already been done, with no possible > chance for misuse or mistakes. > > I guess my question is, in what instances would we say that modeling a > set of patterns rather than a set of types would be better? The only > argument that I can think of is conciseness. Or maybe we don't want to > poison our type hierarchy with an edge case scenario. 
That point > specifically seems to be the logic that Optional is following. > > My hesitation comes from the fact that pattern sets feel a little > leaky. And leaky gives me distress when talking about exhaustiveness > checking. > > With sealed types, if I want to implement SomeSealedInterface, I > **MUST** acknowledge the question of exhaustiveness. There's no way > for me to avoid it. My implementing type MUST be sealed, final, or > non-final. And even if I implement/extend one of the implementing > types of SomeSealedInterface, they either propogate the question, or > they opt-out of exhaustiveness checking. Bullet proof! > > But adding a pattern to a class does not carry the same guarantee. If > I add a new pattern that SHOULD have been part of the exhaustive set, > but isn't, I have introduced a bug. This same bug is NOT POSSIBLE with > sealed types. Hence, leaky. > > I guess my thoughts could be summed up as the following -- I feel like > we are making an escape-hatch for Optional that I don't think would be > worth the weight if there was any other way for Optional to be > exhaustive. And if that is truly true, does that REALLY justify doing > this? Feels tacked onto the side and leaky imo. > > And I will close by saying, I actually used to think this was a good > idea. I have said so on multiple occasions on this exact mailing list. > But the more that I think about it, the more that I see no real good > reason to do this other than "Optional needs it". > > * Conciseness? Not a strong argument. Conciseness should be used to > communicate a semantic difference, not just to shorten code. Think if > statements vs ternary expressions. > > * Semantic difference? Barely, and not in a way that isn't otherwise > possible. It's just when clauses with exhaustiveness attached tbh. > You're better off modeling it explicitly. Again, Parse, don't validate. > > Thank you all for your time and help! > David Alayachew > > [1] = https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From redio.development at gmail.com Fri Apr 5 15:49:52 2024 From: redio.development at gmail.com (Red IO) Date: Fri, 5 Apr 2024 17:49:52 +0200 Subject: Some thoughts about the recent discussion on Member Patterns. In-Reply-To: References: Message-ID: Some thoughts on your concerns. On Fri, Apr 5, 2024, 03:20 David Alayachew wrote: > Hello Amber Dev Team, > > I wanted to chime into the recent discussion about Member Patterns, but on > a side topic. I decided to make this a separate thread to avoid distracting > from the main discussion. > > In that discussion, I saw Brian Goetz make the following claim. > > > ## Exhaustiveness > > > > There is one last syntax question in front of us: how to > > indicate that a set of patterns are (claimed to be) > > exhaustive on a given match candidate type. We see this > > with `Optional::of` and `Optional::empty`; it would be > > sad if the compiler did not realize that these two > > patterns together were exhaustive on `Optional`. This is > > not a feature that will be used often, but not having it > > at all will be repeated irritant. > > > > The best I've come up with is to call these `case` > > patterns, where a set of `case` patterns for a given > > match candidate type in a given class are asserted to be > > an exhaustive set: > > > > ``` > > class Optional { > > static Optional of(T t) { ... } > > static Optional empty() { ... } > > > > static case pattern of(T t) for Optional { ... 
} > > static case pattern empty() for Optional { ... } > > } > > ``` > > > > Because they may not be truly exhaustive, `switch` > > constructs will have to back up the static assumption of > > exhaustiveness with a dynamic check, as we do for other > > sets of exhaustive patterns that may have remainder. > > > > I've experimented with variants of `sealed` but it felt > > more forced, so this is the best I've come up with. > > Later on, I saw Clement Charlin make the following response. > > > # Exhaustiveness > > > > The `case` modifier is fine, but the design should leave > > room for `case LABEL` or `case (LABEL1, LABEL2)` to > > delineate membership in exhaustive set(s), as a potential > > future enhancement. > > To be explicit, I am assuming that we will eventually be able to > exhaustively deconstruct Optional using something like the following. > > switch (someOptional) > { > > case null -> System.out.println("The Optional itself is > null?!"); > case Optional.of(var a) -> System.out.println("Here is " + a); > case Optional.empty() -> System.out.println("There's nothing here"); > > //no default clause needed because this is exhaustive > > } > > Once pattern-matching lands for normal classes, Optional is almost > guaranteed to be the class most frequently deconstructed/pattern-matched. > And since it does not use sealed types, it will really push a lot of people > to model exhaustiveness as a set of methods. > > It's kind of frustrating. > > One article that captures my frustration well is from Alexis King -- > "Parse, don't Validate" [1]. > > In it, she talks about the value of parsing data into a container object, > with the intent of capturing and RETAINING validation via the type name. > > String validEmailAddress vs record ValidEmailAddress(String email) {/** > Validation logic in the canonical constructor. */} > > The moment that the String validEmailAddress leaves the local scope where > the validation occurred, its validation is no longer known except through > tracing. But having a full-blown type allows you to assert that the > validation has already been done, with no possible chance for misuse or > mistakes. > > I guess my question is, in what instances would we say that modeling a set > of patterns rather than a set of types would be better? The only argument > that I can think of is conciseness. Or maybe we don't want to poison our > type hierarchy with an edge case scenario. That point specifically seems to > be the logic that Optional is following. > > My hesitation comes from the fact that pattern sets feel a little leaky. > And leaky gives me distress when talking about exhaustiveness checking. > > With sealed types, if I want to implement SomeSealedInterface, I **MUST** > acknowledge the question of exhaustiveness. There's no way for me to avoid > it. My implementing type MUST be sealed, final, or non-final. And even if I > implement/extend one of the implementing types of SomeSealedInterface, they > either propogate the question, or they opt-out of exhaustiveness checking. > Bullet proof! > > But adding a pattern to a class does not carry the same guarantee. If I > add a new pattern that SHOULD have been part of the exhaustive set, but > isn't, I have introduced a bug. This same bug is NOT POSSIBLE with sealed > types. Hence, leaky. > I don't see how it would be leaky. Adding a case pattern to a class would/should be considered a braking change to the class same as adding a new type to a sealed types hierarchy. 
Causing compile errors on every previously exhaustive pattern match. > I guess my thoughts could be summed up as the following -- I feel like we > are making an escape-hatch for Optional that I don't think would be worth > the weight if there was any other way for Optional to be exhaustive. And if > that is truly true, does that REALLY justify doing this? Feels tacked onto > the side and leaky imo. > > And I will close by saying, I actually used to think this was a good idea. > I have said so on multiple occasions on this exact mailing list. But the > more that I think about it, the more that I see no real good reason to do > this other than "Optional needs it". > There is a need for objects to be distinguishable in different states without being split in different types. As much as I love type driven design it isn't the only case for pattern matching. Especially with things like value classes that are fundamentally incompatible with identity based type matching of sealed types. Also old classes that expose constructors can't be turned into a sealed hierarchy without braking api changes. But changing a class from 0 to X case patterns isn't a braking change similar to the addition of generics in the past. > * Conciseness? Not a strong argument. Conciseness should be used to > communicate a semantic difference, not just to shorten code. Think if > statements vs ternary expressions. > > * Semantic difference? Barely, and not in a way that isn't otherwise > possible. It's just when clauses with exhaustiveness attached tbh. You're > better off modeling it explicitly. Again, Parse, don't validate. > > Thank you all for your time and help! > David Alayachew > > [1] = https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/ > Great regards RedIODev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Fri Apr 5 15:56:25 2024 From: davidalayachew at gmail.com (David Alayachew) Date: Fri, 5 Apr 2024 11:56:25 -0400 Subject: Some thoughts about the recent discussion on Member Patterns. In-Reply-To: <3cce119b-36c4-465e-ad77-51cf725c9bfd@oracle.com> References: <3cce119b-36c4-465e-ad77-51cf725c9bfd@oracle.com> Message-ID: This makes a lot of sense. So, value classes that would like to make use of pattern-matching are going to be the ones that need this window. That makes perfect sense to me. Ty vm! On Fri, Apr 5, 2024, 8:59?AM Brian Goetz wrote: > The question of "why don't you just turn Optional into an algebraic data > type, and be done with it" is valid, and has been asked before (though, > usually it is not asked so constructively.) In many ways this is the > "obvious" answer. > > So, why have we doggedly refused to do the "obvious" thing? Because > object modeling is not the only consideration for important platform > classes like Optional. > > In particular, Amber is not the only force that is moving the platform > forward; there is also Valhalla. And we would very much like Optional to > be a value type, to gain all the benefits that can confer. But the "why > don't you just model it as the sum of None|Some(t)" approach is > incompatible with that. > > So the reason we've "ignored the obvious" (and been willing to pay extra > costs elsewhere) is that we are trying to balance both the object model and > the runtime costs, so that people can "just use Optional" and get the best > of both worlds. > > (This game is harder than it looks!) 
> > > > On 4/4/2024 9:19 PM, David Alayachew wrote: > > Hello Amber Dev Team, > > I wanted to chime into the recent discussion about Member Patterns, but on > a side topic. I decided to make this a separate thread to avoid distracting > from the main discussion. > > In that discussion, I saw Brian Goetz make the following claim. > > > ## Exhaustiveness > > > > There is one last syntax question in front of us: how to > > indicate that a set of patterns are (claimed to be) > > exhaustive on a given match candidate type. We see this > > with `Optional::of` and `Optional::empty`; it would be > > sad if the compiler did not realize that these two > > patterns together were exhaustive on `Optional`. This is > > not a feature that will be used often, but not having it > > at all will be repeated irritant. > > > > The best I've come up with is to call these `case` > > patterns, where a set of `case` patterns for a given > > match candidate type in a given class are asserted to be > > an exhaustive set: > > > > ``` > > class Optional { > > static Optional of(T t) { ... } > > static Optional empty() { ... } > > > > static case pattern of(T t) for Optional { ... } > > static case pattern empty() for Optional { ... } > > } > > ``` > > > > Because they may not be truly exhaustive, `switch` > > constructs will have to back up the static assumption of > > exhaustiveness with a dynamic check, as we do for other > > sets of exhaustive patterns that may have remainder. > > > > I've experimented with variants of `sealed` but it felt > > more forced, so this is the best I've come up with. > > Later on, I saw Clement Charlin make the following response. > > > # Exhaustiveness > > > > The `case` modifier is fine, but the design should leave > > room for `case LABEL` or `case (LABEL1, LABEL2)` to > > delineate membership in exhaustive set(s), as a potential > > future enhancement. > > To be explicit, I am assuming that we will eventually be able to > exhaustively deconstruct Optional using something like the following. > > switch (someOptional) > { > > case null -> System.out.println("The Optional itself is > null?!"); > case Optional.of(var a) -> System.out.println("Here is " + a); > case Optional.empty() -> System.out.println("There's nothing here"); > > //no default clause needed because this is exhaustive > > } > > Once pattern-matching lands for normal classes, Optional is almost > guaranteed to be the class most frequently deconstructed/pattern-matched. > And since it does not use sealed types, it will really push a lot of people > to model exhaustiveness as a set of methods. > > It's kind of frustrating. > > One article that captures my frustration well is from Alexis King -- > "Parse, don't Validate" [1]. > > In it, she talks about the value of parsing data into a container object, > with the intent of capturing and RETAINING validation via the type name. > > String validEmailAddress vs record ValidEmailAddress(String email) {/** > Validation logic in the canonical constructor. */} > > The moment that the String validEmailAddress leaves the local scope where > the validation occurred, its validation is no longer known except through > tracing. But having a full-blown type allows you to assert that the > validation has already been done, with no possible chance for misuse or > mistakes. > > I guess my question is, in what instances would we say that modeling a set > of patterns rather than a set of types would be better? The only argument > that I can think of is conciseness. 
Or maybe we don't want to poison our > type hierarchy with an edge case scenario. That point specifically seems to > be the logic that Optional is following. > > My hesitation comes from the fact that pattern sets feel a little leaky. > And leaky gives me distress when talking about exhaustiveness checking. > > With sealed types, if I want to implement SomeSealedInterface, I **MUST** > acknowledge the question of exhaustiveness. There's no way for me to avoid > it. My implementing type MUST be sealed, final, or non-final. And even if I > implement/extend one of the implementing types of SomeSealedInterface, they > either propogate the question, or they opt-out of exhaustiveness checking. > Bullet proof! > > But adding a pattern to a class does not carry the same guarantee. If I > add a new pattern that SHOULD have been part of the exhaustive set, but > isn't, I have introduced a bug. This same bug is NOT POSSIBLE with sealed > types. Hence, leaky. > > I guess my thoughts could be summed up as the following -- I feel like we > are making an escape-hatch for Optional that I don't think would be worth > the weight if there was any other way for Optional to be exhaustive. And if > that is truly true, does that REALLY justify doing this? Feels tacked onto > the side and leaky imo. > > And I will close by saying, I actually used to think this was a good idea. > I have said so on multiple occasions on this exact mailing list. But the > more that I think about it, the more that I see no real good reason to do > this other than "Optional needs it". > > * Conciseness? Not a strong argument. Conciseness should be used to > communicate a semantic difference, not just to shorten code. Think if > statements vs ternary expressions. > > * Semantic difference? Barely, and not in a way that isn't otherwise > possible. It's just when clauses with exhaustiveness attached tbh. You're > better off modeling it explicitly. Again, Parse, don't validate. > > Thank you all for your time and help! > David Alayachew > > [1] = https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Fri Apr 5 16:01:10 2024 From: davidalayachew at gmail.com (David Alayachew) Date: Fri, 5 Apr 2024 12:01:10 -0400 Subject: Some thoughts about the recent discussion on Member Patterns. In-Reply-To: References: Message-ID: Most of my points are now defunct, but I will answer just to clear my intent. When I said leaky, I was referring to the fact that it is very easy to add a pattern to the class, forget to add it to the accompanying pattern set, and then introduce a bug. I was more bemoaning the loss of a safety net that was available to us with sealed types, but after rereading again, that doesn't feel like a strong argument at all. As for the other points Brian proved them wrong. But I do appreciate you expanding on why they are wrong! The constructor point was very useful to communicate the migration incompatibility. On Fri, Apr 5, 2024, 11:50?AM Red IO wrote: > Some thoughts on your concerns. > > On Fri, Apr 5, 2024, 03:20 David Alayachew > wrote: > >> Hello Amber Dev Team, >> >> I wanted to chime into the recent discussion about Member Patterns, but >> on a side topic. I decided to make this a separate thread to avoid >> distracting from the main discussion. >> >> In that discussion, I saw Brian Goetz make the following claim. 
>> >> > ## Exhaustiveness >> > >> > There is one last syntax question in front of us: how to >> > indicate that a set of patterns are (claimed to be) >> > exhaustive on a given match candidate type. We see this >> > with `Optional::of` and `Optional::empty`; it would be >> > sad if the compiler did not realize that these two >> > patterns together were exhaustive on `Optional`. This is >> > not a feature that will be used often, but not having it >> > at all will be repeated irritant. >> > >> > The best I've come up with is to call these `case` >> > patterns, where a set of `case` patterns for a given >> > match candidate type in a given class are asserted to be >> > an exhaustive set: >> > >> > ``` >> > class Optional { >> > static Optional of(T t) { ... } >> > static Optional empty() { ... } >> > >> > static case pattern of(T t) for Optional { ... } >> > static case pattern empty() for Optional { ... } >> > } >> > ``` >> > >> > Because they may not be truly exhaustive, `switch` >> > constructs will have to back up the static assumption of >> > exhaustiveness with a dynamic check, as we do for other >> > sets of exhaustive patterns that may have remainder. >> > >> > I've experimented with variants of `sealed` but it felt >> > more forced, so this is the best I've come up with. >> >> Later on, I saw Clement Charlin make the following response. >> >> > # Exhaustiveness >> > >> > The `case` modifier is fine, but the design should leave >> > room for `case LABEL` or `case (LABEL1, LABEL2)` to >> > delineate membership in exhaustive set(s), as a potential >> > future enhancement. >> >> To be explicit, I am assuming that we will eventually be able to >> exhaustively deconstruct Optional using something like the following. >> >> switch (someOptional) >> { >> >> case null -> System.out.println("The Optional itself is >> null?!"); >> case Optional.of(var a) -> System.out.println("Here is " + a); >> case Optional.empty() -> System.out.println("There's nothing here"); >> >> //no default clause needed because this is exhaustive >> >> } >> >> Once pattern-matching lands for normal classes, Optional is almost >> guaranteed to be the class most frequently deconstructed/pattern-matched. >> And since it does not use sealed types, it will really push a lot of people >> to model exhaustiveness as a set of methods. >> >> It's kind of frustrating. >> >> One article that captures my frustration well is from Alexis King -- >> "Parse, don't Validate" [1]. >> >> In it, she talks about the value of parsing data into a container object, >> with the intent of capturing and RETAINING validation via the type name. >> >> String validEmailAddress vs record ValidEmailAddress(String email) {/** >> Validation logic in the canonical constructor. */} >> >> The moment that the String validEmailAddress leaves the local scope where >> the validation occurred, its validation is no longer known except through >> tracing. But having a full-blown type allows you to assert that the >> validation has already been done, with no possible chance for misuse or >> mistakes. >> >> I guess my question is, in what instances would we say that modeling a >> set of patterns rather than a set of types would be better? The only >> argument that I can think of is conciseness. Or maybe we don't want to >> poison our type hierarchy with an edge case scenario. That point >> specifically seems to be the logic that Optional is following. >> >> My hesitation comes from the fact that pattern sets feel a little leaky. 
>> And leaky gives me distress when talking about exhaustiveness checking. >> >> With sealed types, if I want to implement SomeSealedInterface, I **MUST** >> acknowledge the question of exhaustiveness. There's no way for me to avoid >> it. My implementing type MUST be sealed, final, or non-final. And even if I >> implement/extend one of the implementing types of SomeSealedInterface, they >> either propogate the question, or they opt-out of exhaustiveness checking. >> Bullet proof! >> >> But adding a pattern to a class does not carry the same guarantee. If I >> add a new pattern that SHOULD have been part of the exhaustive set, but >> isn't, I have introduced a bug. This same bug is NOT POSSIBLE with sealed >> types. Hence, leaky. >> > > I don't see how it would be leaky. Adding a case pattern to a class > would/should be considered a braking change to the class same as adding a > new type to a sealed types hierarchy. Causing compile errors on every > previously exhaustive pattern match. > > >> I guess my thoughts could be summed up as the following -- I feel like we >> are making an escape-hatch for Optional that I don't think would be worth >> the weight if there was any other way for Optional to be exhaustive. And if >> that is truly true, does that REALLY justify doing this? Feels tacked onto >> the side and leaky imo. >> >> And I will close by saying, I actually used to think this was a good >> idea. I have said so on multiple occasions on this exact mailing list. But >> the more that I think about it, the more that I see no real good reason to >> do this other than "Optional needs it". >> > > There is a need for objects to be distinguishable in different states > without being split in different types. As much as I love type driven > design it isn't the only case for pattern matching. Especially with things > like value classes that are fundamentally incompatible with identity based > type matching of sealed types. Also old classes that expose constructors > can't be turned into a sealed hierarchy without braking api changes. But > changing a class from 0 to X case patterns isn't a braking change similar > to the addition of generics in the past. > > >> * Conciseness? Not a strong argument. Conciseness should be used to >> communicate a semantic difference, not just to shorten code. Think if >> statements vs ternary expressions. >> >> * Semantic difference? Barely, and not in a way that isn't otherwise >> possible. It's just when clauses with exhaustiveness attached tbh. You're >> better off modeling it explicitly. Again, Parse, don't validate. >> >> Thank you all for your time and help! >> David Alayachew >> >> [1] = https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/ >> > > Great regards > RedIODev > >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From netswengineer at yahoo.com Thu Apr 11 08:07:46 2024 From: netswengineer at yahoo.com (J.K.) 
Date: Thu, 11 Apr 2024 08:07:46 +0000 (UTC)
Subject: Feedback on JEP 463: Implicitly Declared Classes and Instance Main Methods (Second Preview)
References: <364925606.4782724.1712822866625.ref@mail.yahoo.com>
Message-ID: <364925606.4782724.1712822866625@mail.yahoo.com>

Hi,

I would like to give my feedback on:

JEP 463: Implicitly Declared Classes and Instance Main Methods (Second Preview)
JEP draft: Implicitly Declared Classes and Instance Main Methods (Third Preview)

The feedback is from the point of view of a Java programmer/developer, not a Java spec/JVM expert who knows all the internal Java implementation and restrictions.

My understanding is that this feature is mainly for making learning Java by new programmers easier.

However, I think we live in the computational era when more and more people from other fields, not necessarily professional programmers, have started using computers to solve their problems. Be it mathematics, natural sciences, medicine, finance, logistics...

On the other hand, professional programmers, like me, have started to write more and more smaller programs in the area of data science (DS) and machine learning (ML).

Both groups of programmers write various smaller programs that import other packages, and these programs are not typically imported by other programs, so declaring a class name is a redundant ceremony.

Still, such programs need to be organized logically and systematically in packages by relevant topics, not all put in one big unnamed package.

For this reason I believe it would be useful to be able to place these kinds of programs in named packages.

Rather than relying on a class name chosen by the host system, the name of the class would be the name of the source file containing the class, as it typically is anyway.

The access modifier of such a class would be public if any class member is public, otherwise the access modifier would be the implicit package modifier.

Example:

package mypackage;

import org.pkg1.*;
import org.pkg2.*;

void main() {
    DS ML code
}

I think it also fits better into the concept of "Growing a program" gradually:

1. an implicit class in the unnamed package

2. an implicit class in a named package

3. an explicit class in a named package, i.e. a class with a declared name, possibly with modifiers and other class extensions or interface implementations

I see a practicality analogy with:

JEP 330: Launch Single-File Source-Code Programs
JEP 458: Launch Multi-File Source-Code Programs

JEP 330 was also mainly for learning purposes, but JEP 458 made it more suitable for real world practical use cases.

Thank you for reading my feedback.

Jiri

From ron.pressler at oracle.com Thu Apr 11 12:19:30 2024
From: ron.pressler at oracle.com (Ron Pressler)
Date: Thu, 11 Apr 2024 12:19:30 +0000
Subject: Feedback on JEP 463: Implicitly Declared Classes and Instance Main Methods (Second Preview)
In-Reply-To: <364925606.4782724.1712822866625@mail.yahoo.com>
References: <364925606.4782724.1712822866625.ref@mail.yahoo.com> <364925606.4782724.1712822866625@mail.yahoo.com>
Message-ID: <6C626BD0-7255-47A8-8368-82043C39949E@oracle.com>

Thank you for trying out this feature!

Implicit classes are not intended to save the typing effort of writing the line `public class MyClass` but the "conceptual burden" of declaring a class when a class is not needed -- there is at most one instance and no other class is interested in your methods.
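To make the contrast concrete, here is a rough sketch (JEP 463 is a preview feature, so the exact syntax may still shift between previews):

// Today: an explicit class that exists only to hold main.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello");
    }
}

// With JEP 463, the whole source file can be just this -- no class
// declaration, and an instance main method with no parameters:
void main() {
    System.out.println("Hello");
}

The second form is what this thread calls an implicitly declared class: the class is still there, but you never have to name it or think about it.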
Given that this is what an implicit class expresses, can you explain the situations where you've tried to use an implicit class but felt the need to write such a program in a named package?

-- Ron

From netswengineer at yahoo.com Thu Apr 11 18:44:32 2024
From: netswengineer at yahoo.com (J.K.)
Date: Thu, 11 Apr 2024 18:44:32 +0000 (UTC)
Subject: Feedback on JEP 463: Implicitly Declared Classes and Instance Main Methods (Second Preview)
References: <1759858289.4926013.1712861072433.ref@mail.yahoo.com>
Message-ID: <1759858289.4926013.1712861072433@mail.yahoo.com>

Thank you for your reply.

I have a project in Eclipse IDE, where I have various Maven dependencies on external DS/ML libraries such as Apache Commons Math, Smile, Tribuo or DJL. I also have my own classes that add some extra functionality to these external classes.

Then I have multiple classes (programs) with a main method that import some of these DS/ML classes. They are not imported by other classes.
These classes with main method read some data, do data cleaning and visualization, ML model training and saving the model. The classes are organized in different packages by problem topic, depending on what problem they analyze or train model for. It seems awkward to give them a class name explicitly since they don't model any business objects to be instantiated. They are just ad hoc programs for analysis, visualization, presentation. This is my case. I think other Java users may have similar need to create small Java programs with implicit names and organize them by topic in named packages e.g. reading a CSV file with statistical data and display the data in a chart, reading and processing biological data and printing the result etc. For example does this class need an explicit name declaration to make the code more readable: https://github.com/biojava/biojava/blob/master/biojava-core/src/main/java/demo/DemoSixFrameTranslation.java https://biojava.org/docs/api6.1.0/demo/class-use/DemoSixFrameTranslation.html Uses of Class demo.DemoSixFrameTranslation No usage of demo.DemoSixFrameTranslation But even for learning purposes. I remember when I was learning Java with IDE, I typically created a project with various packages: lesson1, lesson2 etc. where I put classes with main method where I experimented with different language features. I wanted to have them organized by topic in packages but I didn't instantiate them from other classes. One of the first thing I learned from the original Sun Java tutorial was, don't use the unnamed package.:-) Jiri On Thursday, April 11, 2024 at 02:19:42 PM GMT+2, Ron Pressler wrote: Thank you for trying out this feature! Implicit classes are not intended to save the typing effort of writing the line `public class MyClass` but the "conceptual burden" of declaring a class when a class is not needed ? there is at most one instance and no other class is interested in your methods. Given that this is what an implicit class expresses, can you explain the situations where you?ve tried to use an implicit class but felt the need to write such a program in a named package? ? Ron > On 11 Apr 2024, at 09:07, J.K. wrote: > > Hi, > > I would like to give my feedback on: > > JEP 463: Implicitly Declared Classes and Instance Main Methods (Second Preview) > > JEP draft: Implicitly Declared Classes and Instance Main Methods (Third Preview) > > The feedback is from the point of view of Java programmer/developer, not Java spec/JVM expert who knows all the internal Java implementation and restrictions. > > My understanding is that this feature is mainly for making learning Java by new programmers easier. > > However, I think we live in the computational era when more and more people from other fields not necessarily professional programmers have started using computers to solve their problems. Be it mathematics, natural sciences, medicine, finance, logistics ? > > On the other hand, professional programmers, like me, have started to write more and more smaller programs in the area of data science (DS) and machine learning (ML). > > Both groups of programmers write various smaller programs that import other packages and these programs are not typically imported by other programs so declaring a class name is a redundant ceremony. > > Still such programs need to be organized logically and systematically in packages by relevant topics, not all put in one big unnamed package. > > For this reason I believe it would be useful to be able to place this kind of programs in named packages. 
> > Rather than relying on a class name chosen by the host system, the name of the class would be the name of the source file containing the class, as it typically is anyway.
> >
> > The access modifier of such a class would be public if any class member is public, otherwise the access modifier would be the implicit package modifier.
> >
> > Example:
> >
> > package mypackage;
> >
> > import org.pkg1.*;
> > import org.pkg2.*;
> >
> > void main() {
> >     DS ML code
> > }
> >
> > I think it also fits better into the concept of "Growing a program" gradually:
> >
> > 1. an implicit class in the unnamed package
> >
> > 2. an implicit class in a named package
> >
> > 3. an explicit class in a named package, i.e. a class with a declared name, possibly with modifiers and other class extensions or interface implementations
> >
> > I see a practicality analogy with:
> >
> > JEP 330: Launch Single-File Source-Code Programs
> > JEP 458: Launch Multi-File Source-Code Programs
> >
> > JEP 330 was also mainly for learning purposes, but JEP 458 made it more suitable for real world practical use cases.
> >
> > Thank you for reading my feedback.
> >
> > Jiri

From davidalayachew at gmail.com Mon Apr 15 03:32:42 2024
From: davidalayachew at gmail.com (David Alayachew)
Date: Sun, 14 Apr 2024 23:32:42 -0400
Subject: Could we add a lint warning for when the type parameter name overloads an existing type name?
Message-ID:

Hello Amber Dev Team, Client Lib Team, and Compiler Dev Team,

In the vein of smoothing the on-ramp for beginners, one of the biggest pain points I have found when tutoring beginners is when they start to learn generics, and then do something like this.

import java.util.*;

public class abc
{

    public static void main(final String[] args)
    {

        final Map<String, Integer> cache = new HashMap<>();

        final int result = updateCache(cache, "abc", 123);

        System.out.println(result);

    }

    public static <Integer> Integer updateCache(final Map<String, Integer> cache, final String key, final int value)
    {

        return cache.put(key, Integer.valueOf(value));

    }

}

$ javac abc.java
abc.java:21: error: cannot find symbol
        return cache.put(key, Integer.valueOf(value));
                                     ^
  symbol:   method valueOf(int)
  location: class Object
1 error

This type of error is the worst because it sends them on the wildest goose chase. They start coming up with the most eldritch deductions as to what could possibly be wrong, and they start actively unlearning stuff that they know to be true.

When I finally show them what is wrong, it's already too late because (1) they start doubting the foundations because the "clearly correct" solution doesn't work for a non-obvious reason, and (2) they usually picked up some incorrect assumptions along the way that neither of us have realized yet.

And the worst part is that, if you removed the "Integer.valueOf", and then changed the third parameter to be final Integer value instead of final int value, then the code compiles and works as expected.
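To spell that out, here is roughly what I mean (my own sketch of the two variants, not code pulled from an actual student):

    // The "working" variant described above -- it compiles, but <Integer> is still
    // a type parameter that shadows java.lang.Integer, so the bad habit sticks around.
    public static <Integer> Integer updateCache(final Map<String, Integer> cache, final String key, final Integer value)
    {
        return cache.put(key, value);
    }

    // What the student almost certainly meant -- no type parameter at all, so
    // Integer is java.lang.Integer and the original body compiles just fine.
    public static Integer updateCache(final Map<String, Integer> cache, final String key, final int value)
    {
        return cache.put(key, Integer.valueOf(value));
    }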
So, the student can actually go pretty far before code starts breaking. That is the absolute worst because they start turning this style of coding into a habit and then when it finally blows up, all of their progress has to be undone. They feel defeated, they hate the feature, they lose motivation, and now I have to work triple time to rebuild all of that. It's a terrible time for everyone involved.

Could we add a lint option that turns this into a warning? Basically says, if you declare a type parameter whose name is an exact match for an already imported class, throw a warning upon compile? Then, this issue can be caught at compile time the second that they introduce it. When they ask me about the warning, I can immediately explain the problem, and this entire fiasco is avoided.

Finally, I also type up this email because this can be kind of easy to miss when you are quickly cycling back and forth between students, trying to make sure everyone is good. I don't have that many now, but back when it was double digits, I distinctly remember falling into this pothole multiple times.

Any thoughts on this feature?

Thank you for your time and consideration!
David Alayachew

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From davidalayachew at gmail.com Mon Apr 15 03:56:38 2024
From: davidalayachew at gmail.com (David Alayachew)
Date: Sun, 14 Apr 2024 23:56:38 -0400
Subject: Could we add a lint warning for when the type parameter name overloads an existing type name?
In-Reply-To:
References:
Message-ID:

Whoops, I meant core-libs, not client libs. I always mix up the 2.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From archie.cobbs at gmail.com Mon Apr 15 16:19:02 2024
From: archie.cobbs at gmail.com (Archie Cobbs)
Date: Mon, 15 Apr 2024 11:19:02 -0500
Subject: Could we add a lint warning for when the type parameter name overloads an existing type name?
In-Reply-To:
References:
Message-ID:

Replying only to amber-dev for now...

I agree this seems like it would be an improvement. A slight variant of the problem that I've witnessed cause confusion multiple times is this (boiled down):

public class MyClass<T> {
    public <T> void foo() {
        // easy to confuse what "T" means here
    }
}

Java's decision to make the shadowing of normal variables illegal was a nice "advance" at the time, and I've always wondered why it shouldn't carry over to generic type variables as well. It seems like the same basic principle would apply.

Or maybe not?

Generic type variables are a kind of combination of "type name" and "variable". While Java doesn't allow shadowing for variables, it does for type names - for example:

public class MyClass {
    public class String {    // no error here
    }
    public class T {         // no error here
    }
}

So maybe Java was just making the conservative choice at the time.

-Archie

On Sun, Apr 14, 2024 at 10:33 PM David Alayachew wrote:

> In the vein of smoothing the on-ramp for beginners, one of the biggest
> pain points I have found when tutoring beginners is when they start to
> learn generics, and then do something like this.
>

--
Archie L. Cobbs

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From davidalayachew at gmail.com Mon Apr 15 21:27:15 2024
From: davidalayachew at gmail.com (David Alayachew)
Date: Mon, 15 Apr 2024 17:27:15 -0400
Subject: Could we add a lint warning for when the type parameter name overloads an existing type name?
In-Reply-To:
References:
Message-ID:

Hello Archie,

Thank you for your response!

> I agree this seems like it would be an improvement. A
> slight variant of the problem that I've witnessed cause
> confusion multiple times is this (boiled down):
>
> public class MyClass<T> {
>     public <T> void foo() {
>         // easy to confuse what "T" means here
>     }
> }

Yeah, I have been bitten by this one too. I would very much like it if this also got bundled into the warning concept that I proposed. Shadowing is useful since it facilitates migration, but it has the sharp edge of making future updates a little more error-prone. Leaving it as is might make sense, but those who want it would likely appreciate a guard rail in the form of a warning.

> Java's decision to make the shadowing of normal variables illegal...

I'm actually confused what you mean here. When you say shadowing of normal variables, I am interpreting something like the following example.

public class tuv
{

    int x = 0;

    public void someMethod()
    {
        int x = 2;
        System.out.println(x); //prints 2
    }

    public void anotherMethod()
    {
        System.out.println(x); //prints 0
    }

}

But the above example is legal Java. Could you clarify? Since the rest of your email builds on that point, I'm not able to follow along.

Thank you for your help!
David Alayachew

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From archie.cobbs at gmail.com Mon Apr 15 22:01:21 2024
From: archie.cobbs at gmail.com (Archie Cobbs)
Date: Mon, 15 Apr 2024 17:01:21 -0500
Subject: Could we add a lint warning for when the type parameter name overloads an existing type name?
In-Reply-To:
References:
Message-ID:

On Mon, Apr 15, 2024 at 4:27 PM David Alayachew wrote:

> > Java's decision to make the shadowing of normal variables illegal...
>
> I'm actually confused what you mean here.
>> > > Sorry, my fault for not being clear - Java only disallows "shadowing" for > variables declared in the same method, e.g., like this: > > public void foo(int x) { > while (true) { > float x = 1; // error: variable x is already defined in > method foo(int) > x += 7; > } > } > > But it doesn't prevent variables from overshadowing fields as you point > out. > > -Archie > > -- > Archie L. Cobbs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tzengshinfu at gmail.com Tue Apr 16 16:42:11 2024 From: tzengshinfu at gmail.com (tzengshinfu) Date: Wed, 17 Apr 2024 00:42:11 +0800 Subject: Explaining a better way to Java language design to beginners. Message-ID: Hi, folks, I'm glad that "JEP 463: Implicitly Declared Classes and Instance main Methods" can help beginners start writing basic programs with limited skills and understanding. However, recently I came across this post [ https://twitter.com/relizarov/status/1767978534314627304] and I recall having asked similar questions here before [ https://mail.openjdk.org/pipermail/amber-dev/2023-October/008334.html]. But today, I am explaining Java's design to a beginner from the perspective of a semi-experienced developer. As we transitioned our tech stack from C# to the Java ecosystem, we also welcomed a new colleague who was previously accustomed to PHP. During my guidance on Java, we had the following conversation: New colleague: Why do we use `equals()` for string comparison instead of `==` or `===`? Me: Because `String` is an object. New colleague: Then why can we concatenate strings with the `+` operator? Me: The `+` operator is actually shorthand for `StringBuilder.append()`. >From an OOP perspective, you can also use `string1.concat(string2)`. New colleague: Why isn't the `+` operator used for `BigDecimal` addition here? Me: Because it's also an object... New colleague: Looking back, if strings are objects, shouldn't it be `String string1 = new String()` and then `string1.setValue(new char[] { 's', 't', 'r', 'i', 'n', 'g' })`? Me: That would be too cumbersome... By the way, how did you compare strings in PHP? New colleague: `===`, `strcmp()` is also an option, but less common. And after many more questions... Me: Don't ask, just write, let it become a part of you! Do you have any better explanations for Java's design for beginners? /* GET BETTER EVERY DAY */ -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Tue Apr 16 22:07:07 2024 From: davidalayachew at gmail.com (David Alayachew) Date: Tue, 16 Apr 2024 18:07:07 -0400 Subject: Explaining a better way to Java language design to beginners. In-Reply-To: References: Message-ID: You missed a point. String is the single most used class in Java BY FAR. Nothing comes close to it. And the Java designers knew this would happen before hand. When you are the literal most used class in Java, you get a few privileges. Using + for concatenation is only one of them. String is informally known as a "Blessed Type". This is an informal name to represent types that are so frequently used or so useful that they get treated differently by the JVM. So, in short, blessed types get some extra love and some extra privileges. Since String is a "Blessed Type", it gets many privileges, one of which is the + operator. 
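For readers following the beginner questions above, a small runnable example of the three behaviors in question; the values are made up purely for illustration.

```java
import java.math.BigDecimal;

public class BeginnerQuestions {
    public static void main(String[] args) {
        // Question 1: == compares references, equals() compares contents.
        String a = new String("hi"); // new String(...) forces two distinct objects
        String b = new String("hi");
        System.out.println(a == b);      // false: different objects
        System.out.println(a.equals(b)); // true: same characters

        // Question 2: + on String is compiler-provided sugar; concat() is the plain method call.
        String first = "hello";
        String second = "world";
        String plus = first + ", " + second;
        String concat = first.concat(", ").concat(second);
        System.out.println(plus.equals(concat)); // true

        // Question 3: BigDecimal gets no such sugar, so arithmetic goes through methods.
        BigDecimal total = new BigDecimal("1.10").add(new BigDecimal("2.05"));
        System.out.println(total); // 3.15
    }
}
```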
On Tue, Apr 16, 2024, 12:43?PM tzengshinfu wrote: > Hi, folks, > > I'm glad that "JEP 463: Implicitly Declared Classes and Instance main > Methods" can help beginners start writing basic programs with limited > skills and understanding. However, recently I came across this post [ > https://twitter.com/relizarov/status/1767978534314627304] and I recall > having asked similar questions here before [ > https://mail.openjdk.org/pipermail/amber-dev/2023-October/008334.html]. > But today, I am explaining Java's design to a beginner from the perspective > of a semi-experienced developer. > > As we transitioned our tech stack from C# to the Java ecosystem, we also > welcomed a new colleague who was previously accustomed to PHP. During my > guidance on Java, we had the following conversation: > > New colleague: Why do we use `equals()` for string comparison instead of > `==` or `===`? > Me: Because `String` is an object. > New colleague: Then why can we concatenate strings with the `+` operator? > Me: The `+` operator is actually shorthand for `StringBuilder.append()`. > From an OOP perspective, you can also use `string1.concat(string2)`. > New colleague: Why isn't the `+` operator used for `BigDecimal` addition > here? > Me: Because it's also an object... > New colleague: Looking back, if strings are objects, shouldn't it be > `String string1 = new String()` and then `string1.setValue(new char[] { > 's', 't', 'r', 'i', 'n', 'g' })`? > Me: That would be too cumbersome... By the way, how did you compare > strings in PHP? > New colleague: `===`, `strcmp()` is also an option, but less common. > And after many more questions... > Me: Don't ask, just write, let it become a part of you! > > Do you have any better explanations for Java's design for beginners? > > > /* GET BETTER EVERY DAY */ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.reinhold at oracle.com Wed Apr 17 18:58:14 2024 From: mark.reinhold at oracle.com (Mark Reinhold) Date: Wed, 17 Apr 2024 18:58:14 +0000 Subject: New candidate JEP: 476: Module Import Declarations (Preview) Message-ID: <20240417185812.C4D406C7F8E@eggemoggin.niobe.net> https://openjdk.org/jeps/476 Summary: Enhance the Java programming language with the ability to succinctly import all of the packages exported by a module. This simplifies the reuse of modular libraries, but does not require the importing code to be in a module itself. This is a preview language feature. - Mark From tzengshinfu at gmail.com Thu Apr 18 03:26:21 2024 From: tzengshinfu at gmail.com (tzengshinfu) Date: Thu, 18 Apr 2024 11:26:21 +0800 Subject: Explaining a better way to Java language design to beginners. In-Reply-To: References: Message-ID: Hello, David, Thank you for your reply. What troubles beginners is that types which intuitively could be operated with the + operator require the use of the add() method, while the String, treated as an object, can be concatenated with the + operator. This inconsistency in logic somewhat creates noise in the learning process. (Of course, it will seem natural after understanding more, but at the beginning of learning, it's all about confusion.) However, I understand that everything cannot be changed (given Java's position as a leading language). So, I'll reiterate that phrase: "Don't ask, just write, let it become a part of you!" /* GET BETTER EVERY DAY */ David Alayachew ? 2024?4?17? ?? ??6:07??? > You missed a point. > > String is the single most used class in Java BY FAR. Nothing comes close > to it. 
And the Java designers knew this would happen before hand. > > When you are the literal most used class in Java, you get a few > privileges. Using + for concatenation is only one of them. > > String is informally known as a "Blessed Type". This is an informal name > to represent types that are so frequently used or so useful that they get > treated differently by the JVM. > > So, in short, blessed types get some extra love and some extra privileges. > Since String is a "Blessed Type", it gets many privileges, one of which is > the + operator. > > > On Tue, Apr 16, 2024, 12:43?PM tzengshinfu wrote: > >> Hi, folks, >> >> I'm glad that "JEP 463: Implicitly Declared Classes and Instance main >> Methods" can help beginners start writing basic programs with limited >> skills and understanding. However, recently I came across this post [ >> https://twitter.com/relizarov/status/1767978534314627304] and I recall >> having asked similar questions here before [ >> https://mail.openjdk.org/pipermail/amber-dev/2023-October/008334.html]. >> But today, I am explaining Java's design to a beginner from the perspective >> of a semi-experienced developer. >> >> As we transitioned our tech stack from C# to the Java ecosystem, we also >> welcomed a new colleague who was previously accustomed to PHP. During my >> guidance on Java, we had the following conversation: >> >> New colleague: Why do we use `equals()` for string comparison instead of >> `==` or `===`? >> Me: Because `String` is an object. >> New colleague: Then why can we concatenate strings with the `+` operator? >> Me: The `+` operator is actually shorthand for `StringBuilder.append()`. >> From an OOP perspective, you can also use `string1.concat(string2)`. >> New colleague: Why isn't the `+` operator used for `BigDecimal` addition >> here? >> Me: Because it's also an object... >> New colleague: Looking back, if strings are objects, shouldn't it be >> `String string1 = new String()` and then `string1.setValue(new char[] { >> 's', 't', 'r', 'i', 'n', 'g' })`? >> Me: That would be too cumbersome... By the way, how did you compare >> strings in PHP? >> New colleague: `===`, `strcmp()` is also an option, but less common. >> And after many more questions... >> Me: Don't ask, just write, let it become a part of you! >> >> Do you have any better explanations for Java's design for beginners? >> >> >> /* GET BETTER EVERY DAY */ >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Thu Apr 18 03:46:32 2024 From: davidalayachew at gmail.com (David Alayachew) Date: Wed, 17 Apr 2024 23:46:32 -0400 Subject: Explaining a better way to Java language design to beginners. In-Reply-To: References: Message-ID: Then I will add one more point on the pile -- the new String Templates feature means that you really don't need to use + any more for concatenation. In fact, it's usually easier to read without it. So you can completely side step this issue by telling them to use String Templates. You can even point out that using + is not the ideal anymore (if it ever was). https://docs.oracle.com/en/java/javase/21/language/string-templates.html#GUID-CAEF15BD-C3D1-43D4-B38F-1615B0B1699D And here is the JEP for String Templates -- https://openjdk.org/jeps/459 On Wed, Apr 17, 2024 at 11:26?PM tzengshinfu wrote: > Hello, David, > > Thank you for your reply. 
> > What troubles beginners is that types which intuitively could be operated > with the + operator require the use of the add() method, > while the String, treated as an object, can be concatenated with the + > operator. > This inconsistency in logic somewhat creates noise in the learning process. > (Of course, it will seem natural after understanding more, but at the > beginning of learning, it's all about confusion.) > However, I understand that everything cannot be changed (given Java's > position as a leading language). > > So, I'll reiterate that phrase: "Don't ask, just write, let it become a > part of you!" > > > /* GET BETTER EVERY DAY */ > > > > > David Alayachew ? 2024?4?17? ?? ??6:07??? > >> You missed a point. >> >> String is the single most used class in Java BY FAR. Nothing comes close >> to it. And the Java designers knew this would happen before hand. >> >> When you are the literal most used class in Java, you get a few >> privileges. Using + for concatenation is only one of them. >> >> String is informally known as a "Blessed Type". This is an informal name >> to represent types that are so frequently used or so useful that they get >> treated differently by the JVM. >> >> So, in short, blessed types get some extra love and some extra >> privileges. Since String is a "Blessed Type", it gets many privileges, one >> of which is the + operator. >> >> >> On Tue, Apr 16, 2024, 12:43?PM tzengshinfu wrote: >> >>> Hi, folks, >>> >>> I'm glad that "JEP 463: Implicitly Declared Classes and Instance main >>> Methods" can help beginners start writing basic programs with limited >>> skills and understanding. However, recently I came across this post [ >>> https://twitter.com/relizarov/status/1767978534314627304] and I recall >>> having asked similar questions here before [ >>> https://mail.openjdk.org/pipermail/amber-dev/2023-October/008334.html]. >>> But today, I am explaining Java's design to a beginner from the perspective >>> of a semi-experienced developer. >>> >>> As we transitioned our tech stack from C# to the Java ecosystem, we also >>> welcomed a new colleague who was previously accustomed to PHP. During my >>> guidance on Java, we had the following conversation: >>> >>> New colleague: Why do we use `equals()` for string comparison instead of >>> `==` or `===`? >>> Me: Because `String` is an object. >>> New colleague: Then why can we concatenate strings with the `+` operator? >>> Me: The `+` operator is actually shorthand for `StringBuilder.append()`. >>> From an OOP perspective, you can also use `string1.concat(string2)`. >>> New colleague: Why isn't the `+` operator used for `BigDecimal` addition >>> here? >>> Me: Because it's also an object... >>> New colleague: Looking back, if strings are objects, shouldn't it be >>> `String string1 = new String()` and then `string1.setValue(new char[] { >>> 's', 't', 'r', 'i', 'n', 'g' })`? >>> Me: That would be too cumbersome... By the way, how did you compare >>> strings in PHP? >>> New colleague: `===`, `strcmp()` is also an option, but less common. >>> And after many more questions... >>> Me: Don't ask, just write, let it become a part of you! >>> >>> Do you have any better explanations for Java's design for beginners? >>> >>> >>> /* GET BETTER EVERY DAY */ >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
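As a companion to David's String Templates suggestion above: this is roughly what the preview feature (JEP 459, current at the time of this thread) looks like next to + concatenation. It only compiles with preview features enabled, and the variable names are invented for the example.

```java
public class TemplateDemo {
    public static void main(String[] args) {
        String name = "Ada";
        int unread = 3;

        // Traditional concatenation: the values keep interrupting the text.
        String withPlus = "Hello " + name + ", you have " + unread + " unread messages.";

        // String template (preview): the text reads straight through, values are embedded.
        String withTemplate = STR."Hello \{name}, you have \{unread} unread messages.";

        System.out.println(withPlus.equals(withTemplate)); // true
    }
}
```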
URL: From tzengshinfu at gmail.com Thu Apr 18 10:53:23 2024 From: tzengshinfu at gmail.com (tzengshinfu) Date: Thu, 18 Apr 2024 18:53:23 +0800 Subject: Explaining a better way to Java language design to beginners. In-Reply-To: References: Message-ID: This is where I admire Java - it doesn't blindly follow trends with new features, but instead carefully considers and improves upon them before releasing more refined functionalities. String Templates is one example, and Virtual Threads is another. /* GET BETTER EVERY DAY */ David Alayachew ? 2024?4?18? ?? ??11:47??? > Then I will add one more point on the pile -- the new String Templates > feature means that you really don't need to use + any more for > concatenation. In fact, it's usually easier to read without it. So you can > completely side step this issue by telling them to use String Templates. > You can even point out that using + is not the ideal anymore (if it ever > was). > > > https://docs.oracle.com/en/java/javase/21/language/string-templates.html#GUID-CAEF15BD-C3D1-43D4-B38F-1615B0B1699D > > And here is the JEP for String Templates -- https://openjdk.org/jeps/459 > > On Wed, Apr 17, 2024 at 11:26?PM tzengshinfu > wrote: > >> Hello, David, >> >> Thank you for your reply. >> >> What troubles beginners is that types which intuitively could be operated >> with the + operator require the use of the add() method, >> while the String, treated as an object, can be concatenated with the + >> operator. >> This inconsistency in logic somewhat creates noise in the learning >> process. >> (Of course, it will seem natural after understanding more, but at the >> beginning of learning, it's all about confusion.) >> However, I understand that everything cannot be changed (given Java's >> position as a leading language). >> >> So, I'll reiterate that phrase: "Don't ask, just write, let it become a >> part of you!" >> >> >> /* GET BETTER EVERY DAY */ >> >> >> >> >> David Alayachew ? 2024?4?17? ?? ??6:07??? >> >>> You missed a point. >>> >>> String is the single most used class in Java BY FAR. Nothing comes close >>> to it. And the Java designers knew this would happen before hand. >>> >>> When you are the literal most used class in Java, you get a few >>> privileges. Using + for concatenation is only one of them. >>> >>> String is informally known as a "Blessed Type". This is an informal name >>> to represent types that are so frequently used or so useful that they get >>> treated differently by the JVM. >>> >>> So, in short, blessed types get some extra love and some extra >>> privileges. Since String is a "Blessed Type", it gets many privileges, one >>> of which is the + operator. >>> >>> >>> On Tue, Apr 16, 2024, 12:43?PM tzengshinfu >>> wrote: >>> >>>> Hi, folks, >>>> >>>> I'm glad that "JEP 463: Implicitly Declared Classes and Instance main >>>> Methods" can help beginners start writing basic programs with limited >>>> skills and understanding. However, recently I came across this post [ >>>> https://twitter.com/relizarov/status/1767978534314627304] and I recall >>>> having asked similar questions here before [ >>>> https://mail.openjdk.org/pipermail/amber-dev/2023-October/008334.html]. >>>> But today, I am explaining Java's design to a beginner from the perspective >>>> of a semi-experienced developer. >>>> >>>> As we transitioned our tech stack from C# to the Java ecosystem, we >>>> also welcomed a new colleague who was previously accustomed to PHP. 
During >>>> my guidance on Java, we had the following conversation: >>>> >>>> New colleague: Why do we use `equals()` for string comparison instead >>>> of `==` or `===`? >>>> Me: Because `String` is an object. >>>> New colleague: Then why can we concatenate strings with the `+` >>>> operator? >>>> Me: The `+` operator is actually shorthand for >>>> `StringBuilder.append()`. From an OOP perspective, you can also use >>>> `string1.concat(string2)`. >>>> New colleague: Why isn't the `+` operator used for `BigDecimal` >>>> addition here? >>>> Me: Because it's also an object... >>>> New colleague: Looking back, if strings are objects, shouldn't it be >>>> `String string1 = new String()` and then `string1.setValue(new char[] { >>>> 's', 't', 'r', 'i', 'n', 'g' })`? >>>> Me: That would be too cumbersome... By the way, how did you compare >>>> strings in PHP? >>>> New colleague: `===`, `strcmp()` is also an option, but less common. >>>> And after many more questions... >>>> Me: Don't ask, just write, let it become a part of you! >>>> >>>> Do you have any better explanations for Java's design for beginners? >>>> >>>> >>>> /* GET BETTER EVERY DAY */ >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From forax at univ-mlv.fr Sat Apr 20 17:30:14 2024 From: forax at univ-mlv.fr (Remi Forax) Date: Sat, 20 Apr 2024 19:30:14 +0200 (CEST) Subject: New candidate JEP: 468: Derived Record Creation (Preview) In-Reply-To: References: <20240228200401.D42EB6C2F78@eggemoggin.niobe.net> Message-ID: <1423257949.9216174.1713634214152.JavaMail.zimbra@univ-eiffel.fr> > From: "attila kelemen85" > To: "amber-dev" > Cc: "Gavin Bierman" , "jdk-dev" > Sent: Saturday, April 20, 2024 5:49:22 PM > Subject: Re: New candidate JEP: 468: Derived Record Creation (Preview) > I have a backward compatibility concern about this JEP. Consider that I have the > following record: > `record MyRecord(int x, int y) { }` > One day I realize that I need that 3rd property which I want to add in a > backward compatible way, which I will do the following way: > ``` > record MyRecord(int x, int y, int z) { > public MyRecord(int x, int y) { > this(x, y, 0); > } > } > ``` > As far as I understand, this will still remain binary compatible. However, if I > didn't miss anything, then this JEP makes it non-source compatible, because > someone might wrote the following code: > ``` > var obj1 = new MyRecord(1, 2); > int z = 26; > var obj2 = obj1 with { y = z; } > ``` > If this code is compiled again, then it will compile without error, but while in > the first version `obj2.y == 26`, now `obj2.y == 0`. This seems rather nasty to > me because I was once bitten by this in Gradle (I can't recall if it was Groovy > or Kotlin, but it doesn't really matter), where this is a threat, and you have > to be very careful adding a new property in plugin extensions with a too > generic name. Even though Gradle scripts are far less prone to this, since > those scripts are usually a lot less complicated than normal code. > I saw in the JEP that on the left hand side of the assignment this issue can't > happen, but as far as I can see the above problem is not prevented. > My proposal would be to, instead of shadowing variables, raise a compile time > error when the property name would shadow another variable. Though that still > leaves the above example backward incompatible, but at least I would be > notified of it by the compiler, instead of the compiler silently creating a > bug. 
> Another solution would be that the shadowing is done in the opposite order, and > the `int z = 26;` shadows the record property (with a potential warning). In > this case it would be even source compatible, if I didn't miss something. > Attila Hello, i think you are pre-supposing a specific compiler translation strategy, but it can be compiled like this: MyRecord obj1 = new MyRecord(1, 2); int z = 26; Object carrier = indy(r); // create a carrier object by calling all the accessors // so the side effects are done before calling the block int y = z; MyRecord obj2 = indy(carrier, /*y:*/ y); // create an instance of MyRecord using 'y' and the values in the carrier With the carrier instances working like https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/lang/runtime/Carriers.java regards, R?mi > Mark Reinhold < [ mailto:mark.reinhold at oracle.com | mark.reinhold at oracle.com ] > > ezt ?rta (id?pont: 2024. febr. 28., Sze, 21:04): >> [ https://openjdk.org/jeps/468 | https://openjdk.org/jeps/468 ] >> Summary: Enhance the Java language with derived creation for >> records. Records are immutable objects, so developers frequently create >> new records from old records to model new data. Derived creation >> streamlines code by deriving a new record from an existing record, >> specifying only the components that are different. This is a preview >> language feature. >> - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From forax at univ-mlv.fr Sat Apr 20 17:48:45 2024 From: forax at univ-mlv.fr (forax at univ-mlv.fr) Date: Sat, 20 Apr 2024 19:48:45 +0200 (CEST) Subject: New candidate JEP: 468: Derived Record Creation (Preview) In-Reply-To: References: <20240228200401.D42EB6C2F78@eggemoggin.niobe.net> <1423257949.9216174.1713634214152.JavaMail.zimbra@univ-eiffel.fr> Message-ID: <530811152.9217129.1713635325528.JavaMail.zimbra@univ-eiffel.fr> > From: "attila kelemen85" > To: "Remi Forax" > Cc: "amber-dev" , "Gavin Bierman" > , "jdk-dev" > Sent: Saturday, April 20, 2024 7:35:34 PM > Subject: Re: New candidate JEP: 468: Derived Record Creation (Preview) > The compilation strategy doesn't matter. I'm just considering what the JEP > implies (at least given my understanding) about the meaning of the code. What > you are saying is relevant for binary compatibility (which I don't assume is > broken). My problem is that when the example code is recompiled against the new > version of `MyRecord`, then according to the JLS `MyProperty.z` shadows the > `int z = 26;` (the JEP explicitly states this). So, the compiler must produce > different code for the two variants of the record (otherwise it breaks the spec > written in the JEP). Okay, got it ! You are right, this is a serious issue, the semantics is different depending on if MyRecord has a field "z" or not. R?mi > Remi Forax < [ mailto:forax at univ-mlv.fr | forax at univ-mlv.fr ] > ezt ?rta > (id?pont: 2024. ?pr. 20., Szo, 19:30): >>> From: "attila kelemen85" < [ mailto:attila.kelemen85 at gmail.com | >>> attila.kelemen85 at gmail.com ] > >>> To: "amber-dev" < [ mailto:amber-dev at openjdk.org | amber-dev at openjdk.org ] > >>> Cc: "Gavin Bierman" < [ mailto:gavin.bierman at oracle.com | >>> gavin.bierman at oracle.com ] >, "jdk-dev" < [ mailto:jdk-dev at openjdk.org | >>> jdk-dev at openjdk.org ] > >>> Sent: Saturday, April 20, 2024 5:49:22 PM >>> Subject: Re: New candidate JEP: 468: Derived Record Creation (Preview) >>> I have a backward compatibility concern about this JEP. 
Consider that I have the >>> following record: >>> `record MyRecord(int x, int y) { }` >>> One day I realize that I need that 3rd property which I want to add in a >>> backward compatible way, which I will do the following way: >>> ``` >>> record MyRecord(int x, int y, int z) { >>> public MyRecord(int x, int y) { >>> this(x, y, 0); >>> } >>> } >>> ``` >>> As far as I understand, this will still remain binary compatible. However, if I >>> didn't miss anything, then this JEP makes it non-source compatible, because >>> someone might wrote the following code: >>> ``` >>> var obj1 = new MyRecord(1, 2); >>> int z = 26; >>> var obj2 = obj1 with { y = z; } >>> ``` >>> If this code is compiled again, then it will compile without error, but while in >>> the first version `obj2.y == 26`, now `obj2.y == 0`. This seems rather nasty to >>> me because I was once bitten by this in Gradle (I can't recall if it was Groovy >>> or Kotlin, but it doesn't really matter), where this is a threat, and you have >>> to be very careful adding a new property in plugin extensions with a too >>> generic name. Even though Gradle scripts are far less prone to this, since >>> those scripts are usually a lot less complicated than normal code. >>> I saw in the JEP that on the left hand side of the assignment this issue can't >>> happen, but as far as I can see the above problem is not prevented. >>> My proposal would be to, instead of shadowing variables, raise a compile time >>> error when the property name would shadow another variable. Though that still >>> leaves the above example backward incompatible, but at least I would be >>> notified of it by the compiler, instead of the compiler silently creating a >>> bug. >>> Another solution would be that the shadowing is done in the opposite order, and >>> the `int z = 26;` shadows the record property (with a potential warning). In >>> this case it would be even source compatible, if I didn't miss something. >>> Attila >> Hello, i think you are pre-supposing a specific compiler translation strategy, >> but it can be compiled like this: >> MyRecord obj1 = new MyRecord(1, 2); >> int z = 26; >> Object carrier = indy(r); // create a carrier object by calling all the >> accessors >> // so the side effects are done before calling the block >> int y = z; >> MyRecord obj2 = indy(carrier, /*y:*/ y); // create an instance of MyRecord using >> 'y' and the values in the carrier >> With the carrier instances working like [ >> https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/lang/runtime/Carriers.java >> | >> https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/lang/runtime/Carriers.java >> ] >> regards, >> R?mi >>> Mark Reinhold < [ mailto:mark.reinhold at oracle.com | mark.reinhold at oracle.com ] > >>> ezt ?rta (id?pont: 2024. febr. 28., Sze, 21:04): >>>> [ https://openjdk.org/jeps/468 | https://openjdk.org/jeps/468 ] >>>> Summary: Enhance the Java language with derived creation for >>>> records. Records are immutable objects, so developers frequently create >>>> new records from old records to model new data. Derived creation >>>> streamlines code by deriving a new record from an existing record, >>>> specifying only the components that are different. This is a preview >>>> language feature. >>>> - Mark -------------- next part -------------- An HTML attachment was scrubbed... 
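To spell out the point Attila makes and Rémi concedes above: the hazard is independent of how `with` is compiled, because the name is resolved before any translation strategy runs. Below is a hedged, plain-Java rendering of the carrier-style translation from Rémi's sketch, with invented helper variables; the record is the one from the thread.

```java
// Version 1 of the record from the thread: no z component yet.
record MyRecord(int x, int y) { }

class WithTranslationSketch {
    static MyRecord derive(MyRecord obj1) {
        int z = 26;

        // "Carrier" step: capture every component of obj1 up front.
        int cx = obj1.x();
        int cy = obj1.y(); // captured for uniformity; overridden by the block below

        // The with-block body as the user wrote it: y = z;
        // Against version 1, z can only mean the local variable, so y becomes 26.
        int y = z;

        // Rebuild the record from the captured components plus the overridden one.
        return new MyRecord(cx, y);
    }
}
```

Once the record grows an `int z` component, the same source `obj1 with { y = z; }` re-resolves `z` to the new component under the JEP's shadowing rule, so the block effectively becomes `y = obj1.z()` and the result silently flips to `obj2.y() == 0`. That re-resolution happens at compile time, which is why no choice of carrier or indy plumbing can undo it.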
URL: From davidalayachew at gmail.com Sat Apr 20 19:00:49 2024 From: davidalayachew at gmail.com (David Alayachew) Date: Sat, 20 Apr 2024 15:00:49 -0400 Subject: New candidate JEP: 468: Derived Record Creation (Preview) In-Reply-To: <530811152.9217129.1713635325528.JavaMail.zimbra@univ-eiffel.fr> References: <20240228200401.D42EB6C2F78@eggemoggin.niobe.net> <1423257949.9216174.1713634214152.JavaMail.zimbra@univ-eiffel.fr> <530811152.9217129.1713635325528.JavaMail.zimbra@univ-eiffel.fr> Message-ID: Strongly agreed. In fact, I'm ok even if it is only a warning. But yes, this shadowing is definitely something I want to avoid at all costs. I understand why it is there, and while I still don't agree with the logic, I begrudgingly accept its existence. Still, in response to shadowing, I always prepend every single instance field I have with `this`, precisely so that I can avoid this problem in my normal code. If I am going to be forced to do this for yet another layer, I want at least a warning. Here is a slightly related post I made on amber-dev that addresses something in a similar vein. It is a different form of shadowing. https://mail.openjdk.org/pipermail/amber-dev/2024-April/008704.html On Sat, Apr 20, 2024 at 1:49?PM wrote: > > > ------------------------------ > > *From: *"attila kelemen85" > *To: *"Remi Forax" > *Cc: *"amber-dev" , "Gavin Bierman" < > gavin.bierman at oracle.com>, "jdk-dev" > *Sent: *Saturday, April 20, 2024 7:35:34 PM > *Subject: *Re: New candidate JEP: 468: Derived Record Creation (Preview) > > The compilation strategy doesn't matter. I'm just considering what the JEP > implies (at least given my understanding) about the meaning of the code. > What you are saying is relevant for binary compatibility (which I don't > assume is broken). My problem is that when the example code is recompiled > against the new version of `MyRecord`, then according to the JLS > `MyProperty.z` shadows the `int z = 26;` (the JEP explicitly states this). > So, the compiler must produce different code for the two variants of the > record (otherwise it breaks the spec written in the JEP). > > > Okay, got it ! > > You are right, this is a serious issue, the semantics is different > depending on if MyRecord has a field "z" or not. > > R?mi > > > Remi Forax ezt ?rta (id?pont: 2024. ?pr. 20., Szo, > 19:30): > >> >> >> ------------------------------ >> >> *From: *"attila kelemen85" >> *To: *"amber-dev" >> *Cc: *"Gavin Bierman" , "jdk-dev" < >> jdk-dev at openjdk.org> >> *Sent: *Saturday, April 20, 2024 5:49:22 PM >> *Subject: *Re: New candidate JEP: 468: Derived Record Creation (Preview) >> >> I have a backward compatibility concern about this JEP. Consider that I >> have the following record: >> `record MyRecord(int x, int y) { }` >> >> One day I realize that I need that 3rd property which I want to add in a >> backward compatible way, which I will do the following way: >> >> ``` >> record MyRecord(int x, int y, int z) { >> public MyRecord(int x, int y) { >> this(x, y, 0); >> } >> } >> ``` >> >> As far as I understand, this will still remain binary compatible. >> However, if I didn't miss anything, then this JEP makes it non-source >> compatible, because someone might wrote the following code: >> >> ``` >> var obj1 = new MyRecord(1, 2); >> int z = 26; >> var obj2 = obj1 with { y = z; } >> ``` >> >> If this code is compiled again, then it will compile without error, but >> while in the first version `obj2.y == 26`, now `obj2.y == 0`. 
This seems >> rather nasty to me because I was once bitten by this in Gradle (I can't >> recall if it was Groovy or Kotlin, but it doesn't really matter), where >> this is a threat, and you have to be very careful adding a new property in >> plugin extensions with a too generic name. Even though Gradle scripts are >> far less prone to this, since those scripts are usually a lot less >> complicated than normal code. >> >> I saw in the JEP that on the left hand side of the assignment this issue >> can't happen, but as far as I can see the above problem is not prevented. >> >> My proposal would be to, instead of shadowing variables, raise a compile >> time error when the property name would shadow another variable. Though >> that still leaves the above example backward incompatible, but at least I >> would be notified of it by the compiler, instead of the compiler silently >> creating a bug. >> >> Another solution would be that the shadowing is done in the opposite >> order, and the `int z = 26;` shadows the record property (with a potential >> warning). In this case it would be even source compatible, if I didn't miss >> something. >> >> Attila >> >> >> Hello, i think you are pre-supposing a specific compiler translation >> strategy, but it can be compiled like this: >> >> MyRecord obj1 = new MyRecord(1, 2); >> int z = 26; >> >> Object carrier = indy(r); // create a carrier object by calling all the >> accessors >> // so the side effects are done >> before calling the block >> int y = z; >> MyRecord obj2 = indy(carrier, /*y:*/ y); // create an instance of >> MyRecord using 'y' and the values in the carrier >> >> With the carrier instances working like >> https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/lang/runtime/Carriers.java >> >> regards, >> R?mi >> >> >> Mark Reinhold ezt ?rta (id?pont: 2024. febr. >> 28., Sze, 21:04): >> >>> https://openjdk.org/jeps/468 >>> >>> Summary: Enhance the Java language with derived creation for >>> records. Records are immutable objects, so developers frequently create >>> new records from old records to model new data. Derived creation >>> streamlines code by deriving a new record from an existing record, >>> specifying only the components that are different. This is a preview >>> language feature. >>> >>> - Mark >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From attila.kelemen85 at gmail.com Sat Apr 20 15:49:22 2024 From: attila.kelemen85 at gmail.com (Attila Kelemen) Date: Sat, 20 Apr 2024 17:49:22 +0200 Subject: New candidate JEP: 468: Derived Record Creation (Preview) In-Reply-To: <20240228200401.D42EB6C2F78@eggemoggin.niobe.net> References: <20240228200401.D42EB6C2F78@eggemoggin.niobe.net> Message-ID: I have a backward compatibility concern about this JEP. Consider that I have the following record: `record MyRecord(int x, int y) { }` One day I realize that I need that 3rd property which I want to add in a backward compatible way, which I will do the following way: ``` record MyRecord(int x, int y, int z) { public MyRecord(int x, int y) { this(x, y, 0); } } ``` As far as I understand, this will still remain binary compatible. However, if I didn't miss anything, then this JEP makes it non-source compatible, because someone might wrote the following code: ``` var obj1 = new MyRecord(1, 2); int z = 26; var obj2 = obj1 with { y = z; } ``` If this code is compiled again, then it will compile without error, but while in the first version `obj2.y == 26`, now `obj2.y == 0`. 
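A tiny sketch of the defensive habit David mentions above (always qualifying instance fields with `this`), using a made-up class; it keeps field access unambiguous even when a parameter or local reuses the field's name.

```java
class Account {
    private long balance;

    // The parameter shadows the field; writing this.balance keeps the intent explicit.
    void deposit(long balance) {
        this.balance = this.balance + balance;
    }

    long balance() {
        return this.balance;
    }
}
```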
This seems rather nasty to me because I was once bitten by this in Gradle (I can't recall if it was Groovy or Kotlin, but it doesn't really matter), where this is a threat, and you have to be very careful adding a new property in plugin extensions with a too generic name. Even though Gradle scripts are far less prone to this, since those scripts are usually a lot less complicated than normal code. I saw in the JEP that on the left hand side of the assignment this issue can't happen, but as far as I can see the above problem is not prevented. My proposal would be to, instead of shadowing variables, raise a compile time error when the property name would shadow another variable. Though that still leaves the above example backward incompatible, but at least I would be notified of it by the compiler, instead of the compiler silently creating a bug. Another solution would be that the shadowing is done in the opposite order, and the `int z = 26;` shadows the record property (with a potential warning). In this case it would be even source compatible, if I didn't miss something. Attila Mark Reinhold ezt ?rta (id?pont: 2024. febr. 28., Sze, 21:04): > https://openjdk.org/jeps/468 > > Summary: Enhance the Java language with derived creation for > records. Records are immutable objects, so developers frequently create > new records from old records to model new data. Derived creation > streamlines code by deriving a new record from an existing record, > specifying only the components that are different. This is a preview > language feature. > > - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From attila.kelemen85 at gmail.com Sat Apr 20 17:35:34 2024 From: attila.kelemen85 at gmail.com (Attila Kelemen) Date: Sat, 20 Apr 2024 19:35:34 +0200 Subject: New candidate JEP: 468: Derived Record Creation (Preview) In-Reply-To: <1423257949.9216174.1713634214152.JavaMail.zimbra@univ-eiffel.fr> References: <20240228200401.D42EB6C2F78@eggemoggin.niobe.net> <1423257949.9216174.1713634214152.JavaMail.zimbra@univ-eiffel.fr> Message-ID: The compilation strategy doesn't matter. I'm just considering what the JEP implies (at least given my understanding) about the meaning of the code. What you are saying is relevant for binary compatibility (which I don't assume is broken). My problem is that when the example code is recompiled against the new version of `MyRecord`, then according to the JLS `MyProperty.z` shadows the `int z = 26;` (the JEP explicitly states this). So, the compiler must produce different code for the two variants of the record (otherwise it breaks the spec written in the JEP). Remi Forax ezt ?rta (id?pont: 2024. ?pr. 20., Szo, 19:30): > > > ------------------------------ > > *From: *"attila kelemen85" > *To: *"amber-dev" > *Cc: *"Gavin Bierman" , "jdk-dev" < > jdk-dev at openjdk.org> > *Sent: *Saturday, April 20, 2024 5:49:22 PM > *Subject: *Re: New candidate JEP: 468: Derived Record Creation (Preview) > > I have a backward compatibility concern about this JEP. Consider that I > have the following record: > `record MyRecord(int x, int y) { }` > > One day I realize that I need that 3rd property which I want to add in a > backward compatible way, which I will do the following way: > > ``` > record MyRecord(int x, int y, int z) { > public MyRecord(int x, int y) { > this(x, y, 0); > } > } > ``` > > As far as I understand, this will still remain binary compatible. 
However, > if I didn't miss anything, then this JEP makes it non-source compatible, > because someone might wrote the following code: > > ``` > var obj1 = new MyRecord(1, 2); > int z = 26; > var obj2 = obj1 with { y = z; } > ``` > > If this code is compiled again, then it will compile without error, but > while in the first version `obj2.y == 26`, now `obj2.y == 0`. This seems > rather nasty to me because I was once bitten by this in Gradle (I can't > recall if it was Groovy or Kotlin, but it doesn't really matter), where > this is a threat, and you have to be very careful adding a new property in > plugin extensions with a too generic name. Even though Gradle scripts are > far less prone to this, since those scripts are usually a lot less > complicated than normal code. > > I saw in the JEP that on the left hand side of the assignment this issue > can't happen, but as far as I can see the above problem is not prevented. > > My proposal would be to, instead of shadowing variables, raise a compile > time error when the property name would shadow another variable. Though > that still leaves the above example backward incompatible, but at least I > would be notified of it by the compiler, instead of the compiler silently > creating a bug. > > Another solution would be that the shadowing is done in the opposite > order, and the `int z = 26;` shadows the record property (with a potential > warning). In this case it would be even source compatible, if I didn't miss > something. > > Attila > > > Hello, i think you are pre-supposing a specific compiler translation > strategy, but it can be compiled like this: > > MyRecord obj1 = new MyRecord(1, 2); > int z = 26; > > Object carrier = indy(r); // create a carrier object by calling all the > accessors > // so the side effects are done > before calling the block > int y = z; > MyRecord obj2 = indy(carrier, /*y:*/ y); // create an instance of > MyRecord using 'y' and the values in the carrier > > With the carrier instances working like > https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/lang/runtime/Carriers.java > > regards, > R?mi > > > Mark Reinhold ezt ?rta (id?pont: 2024. febr. > 28., Sze, 21:04): > >> https://openjdk.org/jeps/468 >> >> Summary: Enhance the Java language with derived creation for >> records. Records are immutable objects, so developers frequently create >> new records from old records to model new data. Derived creation >> streamlines code by deriving a new record from an existing record, >> specifying only the components that are different. This is a preview >> language feature. >> >> - Mark > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From duke at openjdk.org Mon Apr 22 01:40:49 2024 From: duke at openjdk.org (Ram Anvesh Reddy) Date: Mon, 22 Apr 2024 01:40:49 GMT Subject: [amber-docs] RFR: Update pattern-match-object-model.md Message-ID: Correct collection literal deconstruction pattern example syntax ( , was used instead of : ) ------------- Commit messages: - Update pattern-match-object-model.md Changes: https://git.openjdk.org/amber-docs/pull/24/files Webrev: https://webrevs.openjdk.org/?repo=amber-docs&pr=24&range=00 Stats: 2 lines in 1 file changed: 0 ins; 0 del; 2 mod Patch: https://git.openjdk.org/amber-docs/pull/24.diff Fetch: git fetch https://git.openjdk.org/amber-docs.git pull/24/head:pull/24 PR: https://git.openjdk.org/amber-docs/pull/24 From duke at openjdk.org Mon Apr 22 08:56:42 2024 From: duke at openjdk.org (Ram Anvesh Reddy) Date: Mon, 22 Apr 2024 08:56:42 GMT Subject: [amber-docs] Integrated: Update pattern-match-object-model.md In-Reply-To: References: Message-ID: On Mon, 22 Apr 2024 01:36:58 GMT, Ram Anvesh Reddy wrote: > Correct collection literal deconstruction pattern example syntax ( , was used instead of : ) This pull request has now been integrated. Changeset: 35d023ff Author: Ram Anvesh Reddy Committer: Gavin Bierman URL: https://git.openjdk.org/amber-docs/commit/35d023ff85394d33ea666c3ec77cc9e9ea9ce229 Stats: 2 lines in 1 file changed: 0 ins; 0 del; 2 mod Update pattern-match-object-model.md ------------- PR: https://git.openjdk.org/amber-docs/pull/24 From brian.goetz at oracle.com Tue Apr 23 13:34:16 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Tue, 23 Apr 2024 13:34:16 +0000 Subject: New candidate JEP: 468: Derived Record Creation (Preview) In-Reply-To: References: <20240228200401.D42EB6C2F78@eggemoggin.niobe.net> Message-ID: So, a further thing to keep in mind is that currently, adding fields to records is not even source compatible to begin with. For example, if we have record Point(int x, int y) { } And a client uses it in a pattern match: case Point(int x, int y): And then we add an `int z` component, the client will break. (When we are able to declare deconstruction patterns, such a migration could include an XY pattern as well as constructor, but we are not there yet.) I think your mail rests on the assumption that it should be fine to modify records willy-nilly and expect compatibility without a recompile-the-world, but I think this is a questionable assumption. Records will likely have features that ordinary classes do not yet have access to for a while, making such changes risky. On Apr 20, 2024, at 5:49 PM, Attila Kelemen > wrote: I have a backward compatibility concern about this JEP. Consider that I have the following record: `record MyRecord(int x, int y) { }` One day I realize that I need that 3rd property which I want to add in a backward compatible way, which I will do the following way: ``` record MyRecord(int x, int y, int z) { public MyRecord(int x, int y) { this(x, y, 0); } } ``` As far as I understand, this will still remain binary compatible. However, if I didn't miss anything, then this JEP makes it non-source compatible, because someone might wrote the following code: ``` var obj1 = new MyRecord(1, 2); int z = 26; var obj2 = obj1 with { y = z; } ``` If this code is compiled again, then it will compile without error, but while in the first version `obj2.y == 26`, now `obj2.y == 0`. 
This seems rather nasty to me because I was once bitten by this in Gradle (I can't recall if it was Groovy or Kotlin, but it doesn't really matter), where this is a threat, and you have to be very careful adding a new property in plugin extensions with a too generic name. Even though Gradle scripts are far less prone to this, since those scripts are usually a lot less complicated than normal code. I saw in the JEP that on the left hand side of the assignment this issue can't happen, but as far as I can see the above problem is not prevented. My proposal would be to, instead of shadowing variables, raise a compile time error when the property name would shadow another variable. Though that still leaves the above example backward incompatible, but at least I would be notified of it by the compiler, instead of the compiler silently creating a bug. Another solution would be that the shadowing is done in the opposite order, and the `int z = 26;` shadows the record property (with a potential warning). In this case it would be even source compatible, if I didn't miss something. Attila Mark Reinhold > ezt ?rta (id?pont: 2024. febr. 28., Sze, 21:04): https://openjdk.org/jeps/468 Summary: Enhance the Java language with derived creation for records. Records are immutable objects, so developers frequently create new records from old records to model new data. Derived creation streamlines code by deriving a new record from an existing record, specifying only the components that are different. This is a preview language feature. - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gavin.bierman at oracle.com Tue Apr 23 13:39:48 2024 From: gavin.bierman at oracle.com (Gavin Bierman) Date: Tue, 23 Apr 2024 13:39:48 +0000 Subject: New candidate JEP: 468: Derived Record Creation (Preview) In-Reply-To: References: <20240228200401.D42EB6C2F78@eggemoggin.niobe.net> Message-ID: <362C57BA-45DE-4175-B506-84379F1E4DF6@oracle.com> Hi Atilla, Further to Brian?s reply, I have updated the JEP to list this phenomenon in a ?Risks and Assumptions? section. Many thanks, Gavin On 20 Apr 2024, at 17:49, Attila Kelemen wrote: I have a backward compatibility concern about this JEP. Consider that I have the following record: `record MyRecord(int x, int y) { }` One day I realize that I need that 3rd property which I want to add in a backward compatible way, which I will do the following way: ``` record MyRecord(int x, int y, int z) { public MyRecord(int x, int y) { this(x, y, 0); } } ``` As far as I understand, this will still remain binary compatible. However, if I didn't miss anything, then this JEP makes it non-source compatible, because someone might wrote the following code: ``` var obj1 = new MyRecord(1, 2); int z = 26; var obj2 = obj1 with { y = z; } ``` If this code is compiled again, then it will compile without error, but while in the first version `obj2.y == 26`, now `obj2.y == 0`. This seems rather nasty to me because I was once bitten by this in Gradle (I can't recall if it was Groovy or Kotlin, but it doesn't really matter), where this is a threat, and you have to be very careful adding a new property in plugin extensions with a too generic name. Even though Gradle scripts are far less prone to this, since those scripts are usually a lot less complicated than normal code. I saw in the JEP that on the left hand side of the assignment this issue can't happen, but as far as I can see the above problem is not prevented. 
My proposal would be to, instead of shadowing variables, raise a compile time error when the property name would shadow another variable. Though that still leaves the above example backward incompatible, but at least I would be notified of it by the compiler, instead of the compiler silently creating a bug. Another solution would be that the shadowing is done in the opposite order, and the `int z = 26;` shadows the record property (with a potential warning). In this case it would be even source compatible, if I didn't miss something. Attila Mark Reinhold > ezt ?rta (id?pont: 2024. febr. 28., Sze, 21:04): https://openjdk.org/jeps/468 Summary: Enhance the Java language with derived creation for records. Records are immutable objects, so developers frequently create new records from old records to model new data. Derived creation streamlines code by deriving a new record from an existing record, specifying only the components that are different. This is a preview language feature. - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rotan.olexandr at gmail.com Tue Apr 23 14:34:56 2024 From: rotan.olexandr at gmail.com (=?UTF-8?B?0IbQny0yNCDQntC70LXQutGB0LDQvdC00YAg0KDQvtGC0LDQvdGM?=) Date: Tue, 23 Apr 2024 17:34:56 +0300 Subject: Extension methods Message-ID: Recently I wrote an email to compiler-dev about my proposal (and implementation) of extension methods. One of the users forwarded me here saying that my proposal might be considered here. Below I will duplicate the mail I have sent to compiler dev. Hope for your feedback. Dear Java Development Team, I hope this email finds you all in good spirits. I am writing to propose the integration of extension methods into the Java programming language, a feature that I believe holds considerable promise in enhancing code readability and maintainability. Extension methods offer a means to extend the functionality of existing classes in a manner that aligns with Java's principles of static typing and object-oriented design. The proposed syntax, exemplified as follows: public static void extensionMethod(extends String s) { ... } adheres to established conventions while providing a concise and intuitive means of extending class behavior. Notably, the use of the `extends` keyword preceding the type parameter clearly denotes the class to be extended, while the method itself is declared as a static member of a class. I wish to emphasize several advantages of extension methods over traditional utility functions. Firstly, extension methods offer a more cohesive approach to code organization by associating functionality directly with the class it extends. This promotes code clarity and reduces cognitive overhead for developers, particularly when working with complex codebases. Secondly, extension methods enhance code discoverability and usability by integrating seamlessly into the class they extend. This integration allows developers to leverage IDE features such as auto-completion and documentation tooltips, thereby facilitating more efficient code exploration and utilization. Lastly, extension methods promote code reusability without the need for subclassing or inheritance, thereby mitigating the risks associated with tight coupling and inheritance hierarchies. This modularity encourages a more flexible and adaptable codebase, conducive to long-term maintainability and scalability. 
In light of these benefits, I believe that the integration of extension methods into Java would represent a significant step forward for the language, aligning it more closely with modern programming paradigms while retaining its core strengths. I am eager to discuss this proposal further and collaborate with you all on its implementation. Your insights and feedback would be invaluable in shaping the future direction of Java development. Thank you for considering this proposal. I look forward to our discussion. The draft implementation can be found in the following branch of the repository: https://github.com/Evemose/jdk/tree/extension-methods. I am new to Java compiler development, so any tips or remarks about what I have done in the wrong way or in the wrong place. I will add complete test coverage a bit later, but for now, there is a link to the archive on my google drive, which contains built in jdk for windows x86-64. If someone is willing to participate in testing as a user, I would appreciate any help. Best regards PS: Note about internal implementation: it introduces a new flag - EXTENSION, that is equal to 1L<<32. It seems like it takes the last vacant bit in a long value type that has not been taken by flags. Not sure what the compiler development community should do about this, but it feels like it could be an obstacle to new features that might be introduced later. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kasperni at gmail.com Tue Apr 23 14:49:05 2024 From: kasperni at gmail.com (Kasper Nielsen) Date: Tue, 23 Apr 2024 15:49:05 +0100 Subject: Extension methods In-Reply-To: References: Message-ID: Hi Rotan, If you search the net, you should be able to find a number of resources for why extension methods will not be added to Java. Here are some https://stackoverflow.com/questions/29466427/what-was-the-design-consideration-of-not-allowing-use-site-injection-of-extensio/29494337#29494337 https://www.youtube.com/watch?v=qKeMB7OoGJk&t=1855s /Kasper On Tue, 23 Apr 2024 at 15:35, ??-24 ????????? ?????? wrote: > > Recently I wrote an email to compiler-dev about my proposal (and implementation) of extension methods. One of the users forwarded me here saying that my proposal might be considered here. Below I will duplicate the mail I have sent to compiler dev. Hope for your feedback. > > Dear Java Development Team, > > I hope this email finds you all in good spirits. I am writing to propose the integration of extension methods into the Java programming language, a feature that I believe holds considerable promise in enhancing code readability and maintainability. > > Extension methods offer a means to extend the functionality of existing classes in a manner that aligns with Java's principles of static typing and object-oriented design. The proposed syntax, exemplified as follows: > > public static void extensionMethod(extends String s) { ... } > > adheres to established conventions while providing a concise and intuitive means of extending class behavior. Notably, the use of the `extends` keyword preceding the type parameter clearly denotes the class to be extended, while the method itself is declared as a static member of a class. > > I wish to emphasize several advantages of extension methods over traditional utility functions. Firstly, extension methods offer a more cohesive approach to code organization by associating functionality directly with the class it extends. 
This promotes code clarity and reduces cognitive overhead for developers, particularly when working with complex codebases. > > Secondly, extension methods enhance code discoverability and usability by integrating seamlessly into the class they extend. This integration allows developers to leverage IDE features such as auto-completion and documentation tooltips, thereby facilitating more efficient code exploration and utilization. > > Lastly, extension methods promote code reusability without the need for subclassing or inheritance, thereby mitigating the risks associated with tight coupling and inheritance hierarchies. This modularity encourages a more flexible and adaptable codebase, conducive to long-term maintainability and scalability. > > In light of these benefits, I believe that the integration of extension methods into Java would represent a significant step forward for the language, aligning it more closely with modern programming paradigms while retaining its core strengths. > > I am eager to discuss this proposal further and collaborate with you all on its implementation. Your insights and feedback would be invaluable in shaping the future direction of Java development. > > Thank you for considering this proposal. I look forward to our discussion. > > The draft implementation can be found in the following branch of the repository: https://github.com/Evemose/jdk/tree/extension-methods. I am new to Java compiler development, so any tips or remarks about what I have done in the wrong way or in the wrong place. I will add complete test coverage a bit later, but for now, there is a link to the archive on my google drive, which contains built in jdk for windows x86-64. If someone is willing to participate in testing as a user, I would appreciate any help. > > Best regards > > PS: Note about internal implementation: it introduces a new flag - EXTENSION, that is equal to 1L<<32. It seems like it takes the last vacant bit in a long value type that has not been taken by flags. Not sure what the compiler development community should do about this, but it feels like it could be an obstacle to new features that might be introduced later. From rotan.olexandr at gmail.com Tue Apr 23 14:57:38 2024 From: rotan.olexandr at gmail.com (=?UTF-8?B?0IbQny0yNCDQntC70LXQutGB0LDQvdC00YAg0KDQvtGC0LDQvdGM?=) Date: Tue, 23 Apr 2024 17:57:38 +0300 Subject: Extension methods In-Reply-To: References: Message-ID: Yeah, I've heard that I have addressed most of the points in the following letter in compiler dev. I will copy my mail below, but if you don't really want to consider this, it's really upsetting. Community craves for it and asks all the time, and I'm sure If it was proposed as JEP it will be gladly accepted by Java devs. As for me, that's the most important thing, not decisions that hand been made a long time ago in the past. I am pretty sure the point I will provide as advantages has already been brought up here numerous times, but I will take some time and would appreciate it if someone from API designers, who once rejected this proposal, would spare some time to discuss this topic with me. 1. "Poor reflective discoverability" essentially means extension methods are not accessible when inspecting class members. That is not some inherent issue of this feature, this is just the way it should be. I'm not really sure if there is someone who has ever been hurt by this, besides maybe some parser-based solutions, but let's be honest, this is a: solvable, b: ridiculously exotic to consider. 2. 
Documentation accessibility is a strange point for me, to be fair. Every IDE nowadays is capable of fetching the right documentation, as well as explicitly mentioning where the method comes from, as is done in C#, Kotlin, Swift and many other languages. I don't think anyone has ever heard complaints about poor documentation of LINQ. Unless someone is writing in Notepad, this point hardly applies. 3. Not overridable. Should they be? I don't think there is a way to achieve some kind of "polymorphic" extensions, and I don't think there should be: extension methods should provide polymorphic target handling, follow the LSP, etc., not the other way around. 4. C# extension methods indeed have some very serious drawbacks compared to Java's, but doesn't this also go the other way around? Canonical utility functions breach object-oriented code style, making users write procedural code like Utils.doSome(obj, params) instead of obj.doSome(params). It's a common issue that users just aren't aware of the existence of certain utility classes and end up with non-optimal code. Code without extension methods is always much more verbose; if the API is supposed to be fluent, the developer can end up with deeply nested invocations. This brings numerous problems, such as majorly reduced readability, as the utility methods wrap each other and the developer reads the processing pipeline "from the end" (a concrete sketch follows at the end of this message). Also, reduced readability always means an increased chance of errors. Writing code using utility methods instead of extensions is just slower; developers have to waste more time writing a statement than they would with extension methods. Some other advantages I will list below, after this list is ended. 5. One of the answers from the first thread you provided ( https://stackoverflow.com/a/29494337) states that omitting extension methods is a "philosophical choice", as API developers should define the API. I have to strongly disagree with that. Extension methods are NOT part of the API, they are an EXTENSION to it. They do not breach encapsulation, as they can't access any internal members of API classes. Extensions have to be imported explicitly (not the containing class), so they are explicitly mentioned in the imports list. Also, aren't utility methods breaching this rule then? The only real difference I see is the difference in notation, and extension methods are clearly much more concise. Now moving on to some of my personal points. I also develop some core library APIs, and sometimes extension methods are just begging to be used. Modifying widely used interfaces is always painful, but that's the only way to provide a concise way to communicate with existing APIs. Moreover, Java is an old language, and some internal implementations of APIs, like the Stream implementations, are so entangled that adding something new to these classes directly becomes a spec-ops task. Some APIs, like the Gatherer API or the Collector API, arise precisely from this necessity to introduce new behaviour without modifying the extended class itself. Extension methods are a de-facto standard for modern languages: C#, Kotlin, Swift, Rust and virtually every other modern language provide this option. JS also has its own unique way of extending APIs. The only modern widely used language that does not have extension methods is Python, and even there that only applies to built-ins; custom classes can be extended as well.
When Java refuses to introduce this feature, it suffers major damage in the eyes of potential switchers from ANY other language available right now, which is retrograde and, for me and any other supporter of Java, really upsetting. Introducing extensions will not invalidate previously written code: if one wants, they can still use utilities as before and pretend that nothing happened. However, for another significant, if not to say major, part of the community, this would be a valuable addition. Virtually every Java utility library (Lombok, Manifold, Xtend) provides extension-method functionality, which clearly shows there is demand for the feature. Extension methods are just syntax sugar, nothing more. They provide a better developer experience and make code less verbose and more readable, which reduces the chance of errors and helps developers build apps faster, while not affecting performance in any way, which is crucial in today's world and may be a major competitive advantage. Also, last but not least, no one from the Java development teams will have to put effort into the implementation. I am willing to take on the full feature development lifecycle, from drafts to testing and integration. Of course, the final changes will need to be reviewed, but I don't think this will be an unbearable burden. Regarding the implementation, these changes are really non-invasive and are unlikely to introduce any issues; the branch already passes all existing tests. I sincerely hope this letter will bring up this discussion once again, as I am open to dialogue, but I am standing my ground about the numerous advantages this feature has. If that's possible, I may file a draft JEP and let the community vote for or against it, to see what Java users think about this proposal. Best regards, Hoping to receive some feedback Author's note: two more mails followed: 1. One thing I really don't like about the extensions debate is when people say "you never know where it comes from". Firstly, if an extension is really an extension and not just a utility method that has been made an extension for fun (which is just bad code design, and a language shouldn't really assume that when designing features), its name should describe what it does just like a regular instance method's name does. Secondly, if you require more elaboration, any IDE nowadays is capable of fetching docs for extensions correctly, as well as pointing out the scope they come from, which I guess should be taken into account. Regarding ambiguity, the common practice (which is also applied in my design) is to prioritize members over extensions; there is no ambiguity if the behaviour is well defined. This could potentially result in source incompatibility with some third-party libraries, but as long as it does not conflict with anything from the stdlib, that's not really our concern. Also, I think it's a rather exotic scenario, and it may clash only with utilities for stdlib classes, which cuts virtually all production code out of the risk group. And as you mentioned earlier, this problem has already been present long enough, just not that widely. The syntax that you have provided, to be fair, smells a bit like PHP to me for some reason. I don't see why not to use dot notation along with static imports instead of lists. For me, the statement you provided was not that easy to understand at first glance; however, that's still much better than traditional utilities.
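[Editor's sketch] For readers following along, here is a minimal illustration of how the proposal is meant to read at the use site. The `extends`-parameter declaration form is the one proposed earlier in this thread; `StringExtensions`, `capitalize`, and the call-site rewriting described in the comments are illustrative assumptions, not part of any accepted design. Only the plain-Java parts below compile today.

```java
import java.util.Locale;

// A plain utility class as it must be written today. Under the proposal, the
// first parameter would instead be marked with `extends`, e.g.
//   public static String capitalize(extends String s) { ... }   // hypothetical syntax
// and the compiler would let callers write "hello".capitalize(), rewriting it
// at compile time into the static call below.
public final class StringExtensions {
    private StringExtensions() {}

    public static String capitalize(String s) {
        if (s.isEmpty()) return s;
        return s.substring(0, 1).toUpperCase(Locale.ROOT) + s.substring(1);
    }

    public static void main(String[] args) {
        // Today: an explicit utility call, typically reached via a static import.
        System.out.println(capitalize("hello"));   // prints "Hello"
        // Proposed (not valid Java today): "hello".capitalize()
    }
}
```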
Also, I guess it is worth mentioning that other languages like C# and Kotlin, and many others, live with the same dot-notation invocation, and I have never really heard that updating the language version was a problem, especially in Kotlin, even though extension methods have been part of that language for a very long time already. 2. And one more remark: when updating the Java version, if there are potential ambiguities in the source files of some dependencies under the new version, but they compiled fine under the older one, the code will still work as it did, because my implementation relies on compile-time code transformation and therefore already compiled code won't suddenly start invoking some other methods. I really hope I could at least bring this discussion up again, as it is indeed a feature the community really wants!
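[Editor's sketch] The "reading the pipeline from the end" complaint in point 4 of the message above can be made concrete with ordinary Java. The `Utils` class here is a made-up stand-in for the kind of helper class being described; the chained form uses only standard stream APIs.

```java
import java.util.List;

public final class PipelineDemo {
    // A hypothetical utility class of the kind described in point 4 above.
    static final class Utils {
        static List<String> distinct(List<String> in) { return in.stream().distinct().toList(); }
        static List<String> sorted(List<String> in)   { return in.stream().sorted().toList(); }
        static List<String> limit(List<String> in, int n) { return in.stream().limit(n).toList(); }
    }

    public static void main(String[] args) {
        List<String> words = List.of("b", "a", "b", "c", "a", "d");

        // Utility style: the steps read inside-out, i.e. "from the end".
        List<String> nested = Utils.limit(Utils.sorted(Utils.distinct(words)), 3);

        // Fluent style: the same steps read left to right.
        List<String> chained = words.stream().distinct().sorted().limit(3).toList();

        System.out.println(nested);   // [a, b, c]
        System.out.println(chained);  // [a, b, c]
    }
}
```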
-------------- next part -------------- An HTML attachment was scrubbed... URL: From rotan.olexandr at gmail.com Tue Apr 23 15:23:21 2024 From: rotan.olexandr at gmail.com (=?UTF-8?B?0IbQny0yNCDQntC70LXQutGB0LDQvdC00YAg0KDQvtGC0LDQvdGM?=) Date: Tue, 23 Apr 2024 18:23:21 +0300 Subject: Extension methods In-Reply-To: References: Message-ID: Also, I have to disagree with the point from the video you attached that one of the cons is that code means different things in different files. Isn't that just what import scope is? Of course you can copy code to another file, not synchronize the imports, and it won't work or will work differently; that's how Java has always been. Extension methods are just syntactic sugar for static imports and nothing more. ??, 23 ???. 2024??. ? 17:58, ??-24 ????????? ?????? < rotan.olexandr at gmail.com>: > Sorry for duplicate, I accidentally sent first message to you privately
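[Editor's sketch] The import-scope analogy above can be shown with plain, valid Java: what an unqualified call means already depends on the import list of the file it sits in. Nothing below is hypothetical.

```java
import static java.util.Arrays.sort;

import java.util.Arrays;

public class ImportScopeDemo {
    public static void main(String[] args) {
        int[] arr = {3, 1, 2};
        // Which method `sort` refers to is determined by the static import at
        // the top of this file; copy this call into a file without that import
        // and it no longer compiles. Extension methods, as proposed in this
        // thread, would resolve through imports in the same way.
        sort(arr);
        System.out.println(Arrays.toString(arr)); // [1, 2, 3]
    }
}
```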
-------------- next part -------------- An HTML attachment was scrubbed... URL: From forax at univ-mlv.fr Tue Apr 23 16:03:40 2024 From: forax at univ-mlv.fr (Remi Forax) Date: Tue, 23 Apr 2024 18:03:40 +0200 (CEST) Subject: New candidate JEP: 468: Derived Record Creation (Preview) In-Reply-To: References: <20240228200401.D42EB6C2F78@eggemoggin.niobe.net> Message-ID: <406969038.11938212.1713888220443.JavaMail.zimbra@univ-eiffel.fr> > From: "Brian Goetz" > To: "attila kelemen85" > Cc: "amber-dev" , "Gavin Bierman" > > Sent: Tuesday, April 23, 2024 3:34:16 PM > Subject: Re: New candidate JEP: 468: Derived Record Creation (Preview) [promoted to amber-spec-expert because I think this discussion is quite interesting] > So, a further thing to keep in mind is that currently, adding fields to records > is not even source compatible to begin with. For example, if we have > record Point(int x, int y) { } > And a client uses it in a pattern match: > case Point(int x, int y): > And then we add an `int z` component, the client will break. (When we are able > to declare deconstruction patterns, such a migration could include an XY > pattern as well as constructor, but we are not there yet.) > I think your mail rests on the assumption that it should be fine to modify > records willy-nilly and expect compatibility without a recompile-the-world, but > I think this is a questionable assumption. This is a reasonable assumption. Java is well known to be backward compatible; people will expect any new feature to be backward compatible too. As an example, the way enums were compiled in 1.5 was not backward compatible if the enums were used in a switch. Later the translation strategy was changed to be backward compatible. One possible quick fix is to restrict the access. For sealed classes, we have restricted the permitted subclasses to be in the same package/same module to avoid such separate compilation issues. Do you think that introducing the same restriction on derived record creation is a good idea? > Records will likely have features that ordinary classes do not yet have access > to for a while, making such changes risky. Yes, the idea of derived record creation is based on the fact that there is a canonical constructor + a way to deconstruct using accessors (for now). I do not see classes having canonical constructors in the future, so yes, this feature is limited to records. Rémi >> On Apr 20, 2024, at 5:49 PM, Attila Kelemen < [ >> mailto:attila.kelemen85 at gmail.com | attila.kelemen85 at gmail.com ] > wrote: >> I have a backward compatibility concern about this JEP. Consider that I have the >> following record: >> `record MyRecord(int x, int y) { }` >> One day I realize that I need that 3rd property which I want to add in a >> backward compatible way, which I will do the following way: >> ``` >> record MyRecord(int x, int y, int z) { >> public MyRecord(int x, int y) { >> this(x, y, 0); >> } >> } >> ``` >> As far as I understand, this will still remain binary compatible. However, if I >> didn't miss anything, then this JEP makes it non-source compatible, because >> someone might write the following code: >> ``` >> var obj1 = new MyRecord(1, 2); >> int z = 26; >> var obj2 = obj1 with { y = z; } >> ``` >> If this code is compiled again, then it will compile without error, but while in >> the first version `obj2.y == 26`, now `obj2.y == 0`.
This seems rather nasty to >> me because I was once bitten by this in Gradle (I can't recall if it was Groovy >> or Kotlin, but it doesn't really matter), where this is a threat, and you have >> to be very careful adding a new property in plugin extensions with a too >> generic name. Even though Gradle scripts are far less prone to this, since >> those scripts are usually a lot less complicated than normal code. >> I saw in the JEP that on the left hand side of the assignment this issue can't >> happen, but as far as I can see the above problem is not prevented. >> My proposal would be to, instead of shadowing variables, raise a compile time >> error when the property name would shadow another variable. Though that still >> leaves the above example backward incompatible, but at least I would be >> notified of it by the compiler, instead of the compiler silently creating a >> bug. >> Another solution would be that the shadowing is done in the opposite order, and >> the `int z = 26;` shadows the record property (with a potential warning). In >> this case it would be even source compatible, if I didn't miss something. >> Attila -------------- next part -------------- An HTML attachment was scrubbed... URL: From archie.cobbs at gmail.com Tue Apr 23 16:43:22 2024 From: archie.cobbs at gmail.com (Archie Cobbs) Date: Tue, 23 Apr 2024 11:43:22 -0500 Subject: Extension methods In-Reply-To: References: Message-ID: I can no longer resist jumping in with an opinion... :) >From [compiler-dev]: On Tue, Apr 23, 2024 at 10:36?AM Ethan McCue wrote: > What's going to suck to hear, but I think that you'll come around > eventually, is that extension methods do not improve code readability. They > make it harder to read o.method() since it would be ambiguous whether > .method is an instance method or an extension method.* > > They *do* make it easier to write programs though. Without them you do > have to write more characters and you do sometimes have to break up method > chains. > > Historically, given a choice between code readability and code > writability/terseness, Java has erred towards the first. > IMHO this is the heart of the problem. Personally I'm a fanatic about this - readability is 100x more important than writability if you're writing code which is going to be used & maintained for a long time (think enterprise, i.e., Java's sweet spot). FWIW here's my previous rant on this topic https://mail.openjdk.org/pipermail/amber-dev/2022-November/007580.html (that was in a discussion about making all exceptions unchecked...) -Archie -- Archie L. Cobbs -------------- next part -------------- An HTML attachment was scrubbed... URL: From rotan.olexandr at gmail.com Tue Apr 23 16:53:03 2024 From: rotan.olexandr at gmail.com (=?UTF-8?B?0IbQny0yNCDQntC70LXQutGB0LDQvdC00YAg0KDQvtGC0LDQvdGM?=) Date: Tue, 23 Apr 2024 19:53:03 +0300 Subject: Extension methods In-Reply-To: References: Message-ID: Any opinions are most welcome, so don't resist making a remark if you have something to say. I personally made my point about this readability issue in my second mail, and I don't mind to repeat that code should speak for itself with words, not places of definitions. There are only two possibilities: you either read code and understand what it does just by what is written, or you hover over the code and IDE does the trick for you. That's why I honestly don't consider readability as issue. 
Also, I wouldn't say nested utility method invocations are more readable than extensions, especially with fluent interfaces like builders or streams, where you essentially have to read the processing chain backwards; so, in my opinion, extensions on the contrary enhance readability. No one will ever suffer if, instead of Arrays.sort(arr), they see arr.sort(). In fact, the second looks better, doesn't it? So, in my opinion, in most cases extensions enhance readability. And for the record: I also agree that readability is significantly more important. Off-topic about checked exceptions: I guess the biggest problem with them is in lambdas, so maybe they could be ignored in lambdas at least? I haven't really studied this subject, just something off the top of my head. On Tue, Apr 23, 2024, 19:43 Archie Cobbs wrote: > I can no longer resist jumping in with an opinion... :) > > From [compiler-dev]: > On Tue, Apr 23, 2024 at 10:36 AM Ethan McCue wrote: > >> What's going to suck to hear, but I think that you'll come around >> eventually, is that extension methods do not improve code readability. They >> make it harder to read o.method() since it would be ambiguous whether >> .method is an instance method or an extension method.* >> >> They *do* make it easier to write programs though. Without them you do >> have to write more characters and you do sometimes have to break up method >> chains. >> >> Historically, given a choice between code readability and code >> writability/terseness, Java has erred towards the first. >> > > IMHO this is the heart of the problem. Personally I'm a fanatic about this > - readability is 100x more important than writability if you're writing > code which is going to be used & maintained for a long time (think > enterprise, i.e., Java's sweet spot). > > FWIW here's my previous rant on this topic > https://mail.openjdk.org/pipermail/amber-dev/2022-November/007580.html > (that was in a discussion about making all exceptions unchecked...) > > -Archie > > -- > Archie L. Cobbs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.r.doyle at gmail.com Tue Apr 23 18:37:52 2024 From: p.r.doyle at gmail.com (Patrick Doyle) Date: Tue, 23 Apr 2024 14:37:52 -0400 Subject: Bug: Compact record constructor is missing generic type info on parameters In-Reply-To: References: Message-ID: Hi all, Is this the wrong place to report bugs like this? What would be the right place? Thanks, -- Patrick Doyle p.r.doyle at gmail.com On Mon, Feb 19, 2024 at 8:52 AM Patrick Doyle wrote: > Hi all, > > I have a JUnit5 test case that demonstrates that if you use the compact > constructor syntax in a record, the reflection info will be missing generic > type information. Implicit constructors work fine, as do explicit canonical > constructors. > > I found this on Temurin 21.0.2 and the Adoptium project suggested I post > here. > > The unit test can be found in the Adoptium bug report: > https://github.com/adoptium/adoptium-support/issues/1025 > > Let me know if there's anything I can do to help. > > Thanks, > -- > Patrick Doyle > p.r.doyle at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Tue Apr 23 19:24:46 2024 From: davidalayachew at gmail.com (David Alayachew) Date: Tue, 23 Apr 2024 15:24:46 -0400 Subject: Bug: Compact record constructor is missing generic type info on parameters In-Reply-To: References: Message-ID: You are definitely in the right spot. I don't know enough to be able to help you though.
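[Editor's sketch] For anyone who wants to reproduce the report quoted above without the JUnit setup, a minimal sketch is below. The record name is made up; the described behaviour (losing the parameterized type when the canonical constructor is written in compact form) is the reporter's observation on affected JDKs, not something asserted here.

```java
import java.lang.reflect.Constructor;
import java.util.List;

public class CompactCtorReflectionDemo {
    record Holder(List<String> items) {
        Holder { }  // compact canonical constructor; remove this line to compare
    }

    public static void main(String[] args) {
        Constructor<?> ctor = Holder.class.getDeclaredConstructors()[0];
        // With the compact constructor, this reportedly prints the raw type
        // (java.util.List) instead of java.util.List<java.lang.String>,
        // because the parameter is flagged as mandated and the generic
        // parameter info is skipped during reflection.
        System.out.println(ctor.getParameters()[0].getParameterizedType());
    }
}
```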
On Tue, Apr 23, 2024 at 2:38?PM Patrick Doyle wrote: > Hi all, > > Is this the wrong place to report bugs like this? What would be the right > place? > > Thanks, > -- > Patrick Doyle > p.r.doyle at gmail.com > > > On Mon, Feb 19, 2024 at 8:52?AM Patrick Doyle wrote: > >> Hi all, >> >> I have a JUnit5 test case that demonstrates that if you use the compact >> constructor syntax in a record, the reflection info will be missing generic >> type information. Implicit constructors work fine, as do explicit canonical >> constructors. >> >> I found this on Temurin 21.0.2 and the Adoptium project suggested I post >> here. >> >> The unit test can be found in the Adoptium bug report: >> https://github.com/adoptium/adoptium-support/issues/1025 >> >> Let me know if there's anything I can do to help. >> >> Thanks, >> -- >> Patrick Doyle >> p.r.doyle at gmail.com >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From archie.cobbs at gmail.com Tue Apr 23 19:28:56 2024 From: archie.cobbs at gmail.com (Archie Cobbs) Date: Tue, 23 Apr 2024 14:28:56 -0500 Subject: Bug: Compact record constructor is missing generic type info on parameters In-Reply-To: References: Message-ID: Hi Patrick, If it's just a normal bug, file it at https://bugs.java.com/bugdatabase/ and it will eventually get triaged over to https://bugs.openjdk.org/ If it's a language question, e.g., a suggestion for some minor improvement, this list would be appropriate. -Archie On Tue, Apr 23, 2024 at 1:38?PM Patrick Doyle wrote: > Hi all, > > Is this the wrong place to report bugs like this? What would be the right > place? > > Thanks, > -- > Patrick Doyle > p.r.doyle at gmail.com > > > On Mon, Feb 19, 2024 at 8:52?AM Patrick Doyle wrote: > >> Hi all, >> >> I have a JUnit5 test case that demonstrates that if you use the compact >> constructor syntax in a record, the reflection info will be missing generic >> type information. Implicit constructors work fine, as do explicit canonical >> constructors. >> >> I found this on Temurin 21.0.2 and the Adoptium project suggested I post >> here. >> >> The unit test can be found in the Adoptium bug report: >> https://github.com/adoptium/adoptium-support/issues/1025 >> >> Let me know if there's anything I can do to help. >> >> Thanks, >> -- >> Patrick Doyle >> p.r.doyle at gmail.com >> > -- Archie L. Cobbs -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.r.doyle at gmail.com Tue Apr 23 19:47:01 2024 From: p.r.doyle at gmail.com (Patrick Doyle) Date: Tue, 23 Apr 2024 15:47:01 -0400 Subject: Bug: Compact record constructor is missing generic type info on parameters In-Reply-To: References: Message-ID: Thanks, Archie. I've submitted the bug report to bugs.java.com. -- Patrick Doyle p.r.doyle at gmail.com On Tue, Apr 23, 2024 at 3:29?PM Archie Cobbs wrote: > Hi Patrick, > > If it's just a normal bug, file it at https://bugs.java.com/bugdatabase/ > and it will eventually get triaged over to https://bugs.openjdk.org/ > > If it's a language question, e.g., a suggestion for some minor > improvement, this list would be appropriate. > > -Archie > > On Tue, Apr 23, 2024 at 1:38?PM Patrick Doyle wrote: > >> Hi all, >> >> Is this the wrong place to report bugs like this? What would be the right >> place? 
>> >> Thanks, >> -- >> Patrick Doyle >> p.r.doyle at gmail.com >> >> >> On Mon, Feb 19, 2024 at 8:52?AM Patrick Doyle >> wrote: >> >>> Hi all, >>> >>> I have a JUnit5 test case that demonstrates that if you use the compact >>> constructor syntax in a record, the reflection info will be missing generic >>> type information. Implicit constructors work fine, as do explicit canonical >>> constructors. >>> >>> I found this on Temurin 21.0.2 and the Adoptium project suggested I post >>> here. >>> >>> The unit test can be found in the Adoptium bug report: >>> https://github.com/adoptium/adoptium-support/issues/1025 >>> >>> Let me know if there's anything I can do to help. >>> >>> Thanks, >>> -- >>> Patrick Doyle >>> p.r.doyle at gmail.com >>> >> > > -- > Archie L. Cobbs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cushon at google.com Tue Apr 23 19:50:58 2024 From: cushon at google.com (Liam Miller-Cushon) Date: Tue, 23 Apr 2024 12:50:58 -0700 Subject: Bug: Compact record constructor is missing generic type info on parameters In-Reply-To: References: Message-ID: The only noteworthy difference in the class file for the record with the compact constructor is that the parameter has the 'mandated' flag set. I think the behaviour is due to this logic in Executable#getAllGenericParameterTypes (which is used by Parameter#getParameterizedType): // If we hit a synthetic or mandated parameter, // use the non generic parameter info. On Tue, Apr 23, 2024 at 12:47?PM Patrick Doyle wrote: > Thanks, Archie. I've submitted the bug report to bugs.java.com. > -- > Patrick Doyle > p.r.doyle at gmail.com > > > On Tue, Apr 23, 2024 at 3:29?PM Archie Cobbs > wrote: > >> Hi Patrick, >> >> If it's just a normal bug, file it at https://bugs.java.com/bugdatabase/ >> and it will eventually get triaged over to https://bugs.openjdk.org/ >> >> If it's a language question, e.g., a suggestion for some minor >> improvement, this list would be appropriate. >> >> -Archie >> >> On Tue, Apr 23, 2024 at 1:38?PM Patrick Doyle >> wrote: >> >>> Hi all, >>> >>> Is this the wrong place to report bugs like this? What would be the >>> right place? >>> >>> Thanks, >>> -- >>> Patrick Doyle >>> p.r.doyle at gmail.com >>> >>> >>> On Mon, Feb 19, 2024 at 8:52?AM Patrick Doyle >>> wrote: >>> >>>> Hi all, >>>> >>>> I have a JUnit5 test case that demonstrates that if you use the compact >>>> constructor syntax in a record, the reflection info will be missing generic >>>> type information. Implicit constructors work fine, as do explicit canonical >>>> constructors. >>>> >>>> I found this on Temurin 21.0.2 and the Adoptium project suggested I >>>> post here. >>>> >>>> The unit test can be found in the Adoptium bug report: >>>> https://github.com/adoptium/adoptium-support/issues/1025 >>>> >>>> Let me know if there's anything I can do to help. >>>> >>>> Thanks, >>>> -- >>>> Patrick Doyle >>>> p.r.doyle at gmail.com >>>> >>> >> >> -- >> Archie L. Cobbs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From attila.kelemen85 at gmail.com Tue Apr 23 20:55:36 2024 From: attila.kelemen85 at gmail.com (Attila Kelemen) Date: Tue, 23 Apr 2024 22:55:36 +0200 Subject: New candidate JEP: 468: Derived Record Creation (Preview) In-Reply-To: References: <20240228200401.D42EB6C2F78@eggemoggin.niobe.net> Message-ID: > > So, a further thing to keep in mind is that currently, adding fields to > records is not even source compatible to begin with. 
For example, if we > have > > record Point(int x, int y) { } > > And a client uses it in a pattern match: > > case Point(int x, int y): > > And then we add an `int z` component, the client will break. (When we are > able to declare deconstruction patterns, such a migration could include an > XY pattern as well as constructor, but we are not there yet.) > To be honest, I didn't think of that scenario. So, thank you for making me more alert to potential issues. > I think your mail rests on the assumption that it should be fine to modify > records willy-nilly and expect compatibility without a recompile-the-world, > but I think this is a questionable assumption. Records will likely have > features that ordinary classes do not yet have access to for a while, > making such changes risky. > While I would love to be "willy-nilly", in this issue that is not really my concern. Because I'm just considering the following scenarios: Scenario A: After changing the dependency, the dependent code is binary compatible, and compiles fine, but does something else than before the change. Scenario B: After changing the dependency, the dependent code is binary compatible, and fails to compile. Scenario C: We had shadowing in the original code, and none of the above happened. Obviously we have to make an assumption that there is a naming ambiguity one way or the other, otherwise there is no difference between shadowing or compile time error. I think I'm uncontroversial by stating that scenario A is far more damaging than scenario B. And also that scenario C is considerably more likely than the other events. So, the question is if scenario C gains enough to allow for scenario A to happen, which is the opinionated part of course. I would say that scenario C almost always hurts readability. As someone might fail to notice the ambiguity. Failing to notice it can be quite likely given that the existence of the property shadowing the local variable (or field for that matter) is not apparent (assuming that the declaration of the record is not very close). So, someone reading such a code has a decent likelihood to conclude that the variable is the local variable being shadowed, when in fact it is the property of the record. While an IDE might color code this (though I would guess they would both be considered local variable by the IDE), but regardless what an IDE might do, I tend to read code in the browser a lot (and I would assume I'm not alone), because I'm just lazy to clone the repo (etc). Now one could argue that shadowing would be beneficial in some cases for lambda parameters, so why not here? But I think the situation is different. The main reason why the lack of shadowing is sometimes bothersome in lambda, is because there are quite a few methods taking an argument, and then passing the same to the lambda (a'la `Map.computeIfAbsent`). That is, the main thing is that it is beneficial, because we know that the variable being shadowed has the same value as the lambda parameter. I don't think this would be anywhere near as common for withers, because not only do we need to have the property have the same value as the shadowed variable (which I think is already unlikely), but we would need to use that value within the withers. Not only this, but in case of lambda we always want to use the lambda parameters over the local to save the cost of capturing the local variable (for no reason). 
However, in case of withers, we have a simple workaround for the rare case of unavoidable conflict: we can just access that property via the source record (which should have the same performance). An additional argument for preferring a compile-time error over shadowing is that if shadowing is chosen for the final version, then there is no going back (even if it later turns out to be a mistake), while if a compile-time error is chosen, there is still an opportunity to introduce shadowing later. So, my conclusion is that compile time error is a far more preferable behavior over shadowing. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From brian.goetz at oracle.com Wed Apr 24 03:10:42 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Wed, 24 Apr 2024 03:10:42 +0000 Subject: New candidate JEP: 468: Derived Record Creation (Preview) In-Reply-To: References: <20240228200401.D42EB6C2F78@eggemoggin.niobe.net> Message-ID:

> So, my conclusion is that compile time error is a far more preferable behavior over shadowing.

I can see how this would be an attractive idea, but it's not practical. I think this idea rests on some assumptions that are not true: that you can "just" rename a conflicting local out of the way, and secondarily that in the event that a record component is shadowed, you can "just" access it via the record. The first is simply not true; we'll come back for the second in a bit.

Suppose I have some records:

record A(int x, B b) { }
record B(int x) { }

A a = ...
a = a with { b with { x = 3; } }

In the outer reconstruction block, we have component variables x and b; in the inner block, we have a component variable x that shadows the outer x. Neither of these can be renamed "out of the way". Under your proposal, we wouldn't be able to use nested reconstruction on A at all, which is pretty bad.

A similar example is:

record Person(String name, Person parent) { }

I think these examples show that "just make it an error" is a non-starter.

As to "require re-access through the record", this is a bad road to go down. It is highly error-prone, since you now have two ways to access the same logical thing, but they're not the same actual thing: one's a copy, and it might have been mutated since copying. So accessing the original in this context would be questionable. And further, when we extend reconstruction to classes as well as records, the "I can just get it from the record" claim becomes no longer true.

We didn't decide to relax the shadowing rules here out of whim; these names _must_ be accessible because they are fixed by the record declaration.

From p.r.doyle at gmail.com Wed Apr 24 10:53:52 2024 From: p.r.doyle at gmail.com (Patrick Doyle) Date: Wed, 24 Apr 2024 06:53:52 -0400 Subject: Bug: Compact record constructor is missing generic type info on parameters In-Reply-To: References: Message-ID: Thanks, Liam. Perhaps Java 22 doesn't use the "mandated" flag anymore? Oracle has closed my bug as "not a bug": https://bugs.java.com/bugdatabase/view_bug?bug_id=JDK-8331026 > The observations on Windows 11: > JDK 22: Passed all tests. > Close as not an issue. Given that Gradle doesn't support Java 22 yet, this is a bit tricky for me to try at the moment. I know when I build it with Java 21 and then run it with 22, it still fails, but based on the observations from Oracle, I assume it would pass if I compiled the class with Java 22. Question: would it make sense to back-port the fix from Java 22 to the LTS versions (17 and 21)?
If so, who do I contact about that? Thanks again, -- Patrick Doyle p.r.doyle at gmail.com On Tue, Apr 23, 2024 at 3:51?PM Liam Miller-Cushon wrote: > The only noteworthy difference in the class file for the record with the > compact constructor is that the parameter has the 'mandated' flag set. > > I think the behaviour is due to this logic > in > Executable#getAllGenericParameterTypes (which is used by > Parameter#getParameterizedType): > > // If we hit a synthetic or mandated parameter, > // use the non generic parameter info. > > On Tue, Apr 23, 2024 at 12:47?PM Patrick Doyle > wrote: > >> Thanks, Archie. I've submitted the bug report to bugs.java.com. >> -- >> Patrick Doyle >> p.r.doyle at gmail.com >> >> >> On Tue, Apr 23, 2024 at 3:29?PM Archie Cobbs >> wrote: >> >>> Hi Patrick, >>> >>> If it's just a normal bug, file it at https://bugs.java.com/bugdatabase/ >>> and it will eventually get triaged over to https://bugs.openjdk.org/ >>> >>> If it's a language question, e.g., a suggestion for some minor >>> improvement, this list would be appropriate. >>> >>> -Archie >>> >>> On Tue, Apr 23, 2024 at 1:38?PM Patrick Doyle >>> wrote: >>> >>>> Hi all, >>>> >>>> Is this the wrong place to report bugs like this? What would be the >>>> right place? >>>> >>>> Thanks, >>>> -- >>>> Patrick Doyle >>>> p.r.doyle at gmail.com >>>> >>>> >>>> On Mon, Feb 19, 2024 at 8:52?AM Patrick Doyle >>>> wrote: >>>> >>>>> Hi all, >>>>> >>>>> I have a JUnit5 test case that demonstrates that if you use the >>>>> compact constructor syntax in a record, the reflection info will be missing >>>>> generic type information. Implicit constructors work fine, as do explicit >>>>> canonical constructors. >>>>> >>>>> I found this on Temurin 21.0.2 and the Adoptium project suggested I >>>>> post here. >>>>> >>>>> The unit test can be found in the Adoptium bug report: >>>>> https://github.com/adoptium/adoptium-support/issues/1025 >>>>> >>>>> Let me know if there's anything I can do to help. >>>>> >>>>> Thanks, >>>>> -- >>>>> Patrick Doyle >>>>> p.r.doyle at gmail.com >>>>> >>>> >>> >>> -- >>> Archie L. Cobbs >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From forax at univ-mlv.fr Wed Apr 24 11:49:09 2024 From: forax at univ-mlv.fr (Remi Forax) Date: Wed, 24 Apr 2024 13:49:09 +0200 (CEST) Subject: Bug: Compact record constructor is missing generic type info on parameters In-Reply-To: References: Message-ID: <813574949.12407006.1713959349186.JavaMail.zimbra@univ-eiffel.fr> > From: "Patrick Doyle" > To: "Liam Miller-Cushon" > Cc: "Archie Cobbs" , "amber-dev" > Sent: Wednesday, April 24, 2024 12:53:52 PM > Subject: Re: Bug: Compact record constructor is missing generic type info on > parameters > Thanks, Liam. Perhaps Java 22 doesn't use the "mandated" flag anymore? Oracle > has closed by bug as "not a bug": [ > https://bugs.java.com/bugdatabase/view_bug?bug_id=JDK-8331026 | > https://bugs.java.com/bugdatabase/view_bug?bug_id=JDK-8331026 ] > > The observations on Windows 11: > > JDK 22: Passed all tests. > > Close as not an issue. > Given that Gradle doesn't support Java 22 yet, this is a bit tricky for me to > try at the moment. I know when I build it with Java 21 and then run it with 22, > it still fails, but based on the observations from Oracle, I assume it would > pass if I compiled the class with Java 22. 
gradle 8.7 supports Java 22 https://docs.gradle.org/current/release-notes.html#support-for-building-projects-with-java-22 > Question: would it make sense to back-port the fix from Java 22 to the LTS > versions (17 and 21)? If so, who do I contact about that? > Thanks again, > -- > Patrick Doyle > [ mailto:p.r.doyle at gmail.com | p.r.doyle at gmail.com ] regards, R?mi > On Tue, Apr 23, 2024 at 3:51 PM Liam Miller-Cushon < [ mailto:cushon at google.com > | cushon at google.com ] > wrote: >> The only noteworthy difference in the class file for the record with the compact >> constructor is that the parameter has the 'mandated' flag set. >> I think the behaviour is due to [ >> https://github.com/openjdk/jdk/blob/09b88098ff544fec1a4e94bfbbdc21b6c8433abb/src/java.base/share/classes/java/lang/reflect/Executable.java#L344-L345 >> | this logic ] in Executable#getAllGenericParameterTypes (which is used by >> Parameter#getParameterizedType): >> // If we hit a synthetic or mandated parameter, >> // use the non generic parameter info. >> On Tue, Apr 23, 2024 at 12:47 PM Patrick Doyle < [ mailto:p.r.doyle at gmail.com | >> p.r.doyle at gmail.com ] > wrote: >>> Thanks, Archie. I've submitted the bug report to [ http://bugs.java.com/ | >>> bugs.java.com ] . >>> -- >>> Patrick Doyle >>> [ mailto:p.r.doyle at gmail.com | p.r.doyle at gmail.com ] >>> On Tue, Apr 23, 2024 at 3:29 PM Archie Cobbs < [ mailto:archie.cobbs at gmail.com | >>> archie.cobbs at gmail.com ] > wrote: >>>> Hi Patrick, >>>> If it's just a normal bug, file it at [ https://bugs.java.com/bugdatabase/ | >>>> https://bugs.java.com/bugdatabase/ ] and it will eventually get triaged over to >>>> [ https://bugs.openjdk.org/ | https://bugs.openjdk.org/ ] >>>> If it's a language question, e.g., a suggestion for some minor improvement, this >>>> list would be appropriate. >>>> -Archie >>>> On Tue, Apr 23, 2024 at 1:38 PM Patrick Doyle < [ mailto:p.r.doyle at gmail.com | >>>> p.r.doyle at gmail.com ] > wrote: >>>>> Hi all, >>>>> Is this the wrong place to report bugs like this? What would be the right place? >>>>> Thanks, >>>>> -- >>>>> Patrick Doyle >>>>> [ mailto:p.r.doyle at gmail.com | p.r.doyle at gmail.com ] >>>>> On Mon, Feb 19, 2024 at 8:52 AM Patrick Doyle < [ mailto:p.r.doyle at gmail.com | >>>>> p.r.doyle at gmail.com ] > wrote: >>>>>> Hi all, >>>>>> I have a JUnit5 test case that demonstrates that if you use the compact >>>>>> constructor syntax in a record, the reflection info will be missing generic >>>>>> type information. Implicit constructors work fine, as do explicit canonical >>>>>> constructors. >>>>>> I found this on Temurin 21.0.2 and the Adoptium project suggested I post here. >>>>>> The unit test can be found in the Adoptium bug report: [ >>>>>> https://github.com/adoptium/adoptium-support/issues/1025 | >>>>>> https://github.com/adoptium/adoptium-support/issues/1025 ] >>>>>> Let me know if there's anything I can do to help. >>>>>> Thanks, >>>>>> -- >>>>>> Patrick Doyle >>>>>> [ mailto:p.r.doyle at gmail.com | p.r.doyle at gmail.com ] >>>> -- >>>> Archie L. Cobbs -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Wed Apr 24 16:42:08 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Wed, 24 Apr 2024 16:42:08 +0000 Subject: Extension methods In-Reply-To: References: Message-ID: 5. 
One of the answers from the first thread you provided (https://stackoverflow.com/a/29494337) states that omitting extension methods is a "philosophical choice", as API developers should define the API. I have to strongly disagree with that. Extension methods are NOT part of the API, they are EXTENSION to it. It does not breach encapsulation as it can't access any internal members of API classes. You can convince yourself that you like it, but that doesn?t change the fact that it is a deliberate attempt to blur the boundary of the API. And again, you might think that is fine, but we do not. The members of String, and therefore the methods you can invoke through a String receiver, should be defined by the String class (and its supertypes.). Invoking a method on a receiver: aString.encrypt() where String has no method called encrypt(), is muddying the user?s view of what the API of String is. I get that you are willing to say ?the API of String is the methods in String, plus any methods I care to create the illusion of belonging to String?, but that looks like monkey-patching to us. Worse, this ?method call? might mean one thing in one context, and another thing in another context. Extensions have to be imported explicitly (not the containing class), so they are explicitly mentioned in the imports list. Also, are utility methods also breaching this rule then? The only real difference I see is differences in notation, and extension methods are clearly much more concise. Concision is not the goal of programming. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cushon at google.com Wed Apr 24 17:38:47 2024 From: cushon at google.com (Liam Miller-Cushon) Date: Wed, 24 Apr 2024 10:38:47 -0700 Subject: Bug: Compact record constructor is missing generic type info on parameters In-Reply-To: <813574949.12407006.1713959349186.JavaMail.zimbra@univ-eiffel.fr> References: <813574949.12407006.1713959349186.JavaMail.zimbra@univ-eiffel.fr> Message-ID: I can still reproduce the issue on JDK 22 and 23 EA, I re-opened the bug. On Wed, Apr 24, 2024 at 4:49?AM Remi Forax wrote: > > > ------------------------------ > > *From: *"Patrick Doyle" > *To: *"Liam Miller-Cushon" > *Cc: *"Archie Cobbs" , "amber-dev" < > amber-dev at openjdk.org> > *Sent: *Wednesday, April 24, 2024 12:53:52 PM > *Subject: *Re: Bug: Compact record constructor is missing generic type > info on parameters > > Thanks, Liam. Perhaps Java 22 doesn't use the "mandated" flag anymore? > Oracle has closed by bug as "not a bug": > https://bugs.java.com/bugdatabase/view_bug?bug_id=JDK-8331026 > > > The observations on Windows 11: > > JDK 22: Passed all tests. > > Close as not an issue. > > Given that Gradle doesn't support Java 22 yet, this is a bit tricky for me > to try at the moment. I know when I build it with Java 21 and then run it > with 22, it still fails, but based on the observations from Oracle, I > assume it would pass if I compiled the class with Java 22. > > > gradle 8.7 supports Java 22 > > https://docs.gradle.org/current/release-notes.html#support-for-building-projects-with-java-22 > > > > Question: would it make sense to back-port the fix from Java 22 to the LTS > versions (17 and 21)? If so, who do I contact about that? 
> > Thanks again, > -- > Patrick Doyle > p.r.doyle at gmail.com > > > regards, > R?mi > > > On Tue, Apr 23, 2024 at 3:51?PM Liam Miller-Cushon > wrote: > >> The only noteworthy difference in the class file for the record with the >> compact constructor is that the parameter has the 'mandated' flag set. >> >> I think the behaviour is due to this logic >> in >> Executable#getAllGenericParameterTypes (which is used by >> Parameter#getParameterizedType): >> >> // If we hit a synthetic or mandated parameter, >> // use the non generic parameter info. >> >> On Tue, Apr 23, 2024 at 12:47?PM Patrick Doyle >> wrote: >> >>> Thanks, Archie. I've submitted the bug report to bugs.java.com. >>> -- >>> Patrick Doyle >>> p.r.doyle at gmail.com >>> >>> >>> On Tue, Apr 23, 2024 at 3:29?PM Archie Cobbs >>> wrote: >>> >>>> Hi Patrick, >>>> >>>> If it's just a normal bug, file it at >>>> https://bugs.java.com/bugdatabase/ and it will eventually get triaged >>>> over to https://bugs.openjdk.org/ >>>> >>>> If it's a language question, e.g., a suggestion for some minor >>>> improvement, this list would be appropriate. >>>> >>>> -Archie >>>> >>>> On Tue, Apr 23, 2024 at 1:38?PM Patrick Doyle >>>> wrote: >>>> >>>>> Hi all, >>>>> >>>>> Is this the wrong place to report bugs like this? What would be the >>>>> right place? >>>>> >>>>> Thanks, >>>>> -- >>>>> Patrick Doyle >>>>> p.r.doyle at gmail.com >>>>> >>>>> >>>>> On Mon, Feb 19, 2024 at 8:52?AM Patrick Doyle >>>>> wrote: >>>>> >>>>>> Hi all, >>>>>> >>>>>> I have a JUnit5 test case that demonstrates that if you use the >>>>>> compact constructor syntax in a record, the reflection info will be missing >>>>>> generic type information. Implicit constructors work fine, as do explicit >>>>>> canonical constructors. >>>>>> >>>>>> I found this on Temurin 21.0.2 and the Adoptium project suggested I >>>>>> post here. >>>>>> >>>>>> The unit test can be found in the Adoptium bug report: >>>>>> https://github.com/adoptium/adoptium-support/issues/1025 >>>>>> >>>>>> Let me know if there's anything I can do to help. >>>>>> >>>>>> Thanks, >>>>>> -- >>>>>> Patrick Doyle >>>>>> p.r.doyle at gmail.com >>>>>> >>>>> >>>> >>>> -- >>>> Archie L. Cobbs >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Wed Apr 24 17:42:31 2024 From: ron.pressler at oracle.com (Ron Pressler) Date: Wed, 24 Apr 2024 17:42:31 +0000 Subject: Extension methods In-Reply-To: References: Message-ID: <2526FC0A-81D7-46D0-8E83-D187A8BB9AF5@oracle.com> > On 23 Apr 2024, at 15:57, ??-24 ????????? ?????? wrote: > > Community craves for it and asks all the time, and I'm sure If it was proposed as JEP it will be gladly accepted by Java devs. As for me, that's the most important thing, not decisions that hand been made a long time ago in the past. First, the most important thing is to increase the value Java has overall, which does not necessarily always coincide with what developers say they want (e.g. developers rarely ask for security features and often oppose security improvements, while if you look at the value such features create ? by avoiding damage ? you?ll see they are sometimes among the most valuable features). Second, there is a big difference between what you remember seeing some people ask for and what the majority want; this is somewhat similar to the difference between what you see people ask for in protests or in letter-writing campaigns and what the majority actually want. Java has achieved more success than the languages you named ? 
perhaps more than all of them combined ? and it wouldn?t have been that way for so long if we actually didn?t give our users what they need. If you want to change the course of the platform rather than just voice your opinion ? which is perfectly valid but contains no new information; as you can imagine, we hear opinions for or against things all the time and extension methods are old news indeed ? try and report about problems you experience, not ask for features you wish Java had. Such reports carry actual signal, and their impact is huge. If I said that a single experience report has an impact on outcome of 50x that of a request or opinion it may be an *underestimate*. If 1% of Java users were enthusiastic about extension methods, and 1% of that 1% wrote to us about it, it would be a thousand emails/social media posts, which would carry little information as it is a reasonable working hypothesis that for just about any feature you could find 1% who are excitedly in favour. On the other hand, an experience report from people doing different kinds of work with Java is more likely to contain some new information. As Brian likes to say, we want to learn what we don?t already know. If some feature is not on the roadmap, it will likely be put on the roadmap when we learn something new about the market. ? Ron From kan.izh at gmail.com Wed Apr 24 20:23:25 2024 From: kan.izh at gmail.com (Anatoly Kupriyanov) Date: Wed, 24 Apr 2024 21:23:25 +0100 Subject: Extension methods In-Reply-To: References: Message-ID: In my mind, the main motivation of extension methods is to rearrange the syntax tree, replacing nesting calls with chaining, prefix with postfix. I.e., replacing: f1(f2(f3(x, "p3"), "p2", "p22"), "p1") with x.f3("p3").f2("p2", "p22").f1("p1") It significantly untangles the code and makes it more readable. Good example of good usage for it, could be the java-streams api, to add user defined stream operations. as for "the illusion of belonging" it could be addressed by introducing some special operator instead of dot to highlight the difference, e.g. something like: x?f3("p3")?f2("p2", "p22")?f1("p1") On Wed, 24 Apr 2024 at 20:07, Brian Goetz wrote: > > 5. One of the answers from the first thread you provided ( > https://stackoverflow.com/a/29494337) states that omitting extension > methods is a "philosophical choice", as API developers should define > the API. I have to strongly disagree with that. Extension methods are NOT > part of the API, they are EXTENSION to it. It does not breach > encapsulation as it can't access any internal members of API classes. > > > You can convince yourself that you like it, but that doesn?t change the > fact that it is a deliberate attempt to blur the boundary of the API. And > again, you might think that is fine, but we do not. The members of String, > and therefore the methods you can invoke through a String receiver, should > be defined by the String class (and its supertypes.). Invoking a method on > a receiver: > > aString.encrypt() > > where String has no method called encrypt(), is muddying the user?s view > of what the API of String is. I get that you are willing to say ?the API > of String is the methods in String, plus any methods I care to create the > illusion of belonging to String?, but that looks like monkey-patching to > us. Worse, this ?method call? might mean one thing in one context, and > another thing in another context. 
> > Extensions have to be imported explicitly (not the containing class), so > they are explicitly mentioned in the imports list. Also, are utility > methods also breaching this rule then? The only real difference I see is > differences in notation, and extension methods are clearly much more > concise. > > > Concision is not the goal of programming. > > > -- WBR, Anatoly. -------------- next part -------------- An HTML attachment was scrubbed... URL: From attila.kelemen85 at gmail.com Wed Apr 24 20:30:10 2024 From: attila.kelemen85 at gmail.com (Attila Kelemen) Date: Wed, 24 Apr 2024 22:30:10 +0200 Subject: New candidate JEP: 468: Derived Record Creation (Preview) In-Reply-To: References: <20240228200401.D42EB6C2F78@eggemoggin.niobe.net> Message-ID: > > I can see how this would be an attractive idea, but its not practical. I > think this idea rests on some assumptions that are not true, that you can > ?just? rename a conflicting local out of the way, and secondarily that in > the event that a record component is shadowed, you can ?just? access it via > the record. The first is simply not true; we?ll come back for the second > in a bit. > > Suppose I have some records: > > record A(int x, B b) { } > record B(int x) { } > > A a = ? > a = a with { b with { x = 3; } } > > In the outer reconstruction block, we have component variables x and b; in > the inner block, we have a component variable x that shadows the outer x. > Neither of these can be renamed ?out of the way?. Under your proposal, we > wouldn?t be able to use nested reconstruction on A at all, which is pretty > bad. > I admit, I should have been clearer here, because there is an asymmetry here. I'm not talking about the left hand side (your example above should compile fine). In fact, I believe the left hand side is addressed perfectly by the JEP. That is, on the left hand side it does not allow anything outside the curlies of the `with` (so there can be no problem with that). That is, the compile time error would not make nesting impossible. So, the question is how the expression "x" is treated. In fact, such a nested case is the most error prone and requires being explicit about your choice of which "x" expression are we talking about. For instance, let's look at a modified example (to contain such conflicts): ``` record A(int x, int y, A other) { } A a = ...; // assume a.other != null a = a with { other = other.with { x = y; } } ``` I think in the above nested example, it is less confusing if we disambiguate by writing `other.y`. The reason being is that if someone reads the code then there is always a decent chance to notice a variable in an outer scope and then erroneously just assume the variable is what you have noticed. Especially, when different types and many properties are involved (or simply due to asymmetric familiarity with the types involved). A similar example is: > > record Person(String name, Person parent) { } > > I think these examples shows that ?just make it an error? is a > non-starter. > > As to ?require re-access through the record?, this is a bad road to go > down. It is highly error-prone, since you now have two ways to access the > same logical thing, but they?re not the same actual thing ? one?s a copy, > and it might have been mutated since copying. So accessing the original in > this context would be questionable. And further, when we extent > reconstruction to classes as well as records, the ?I can just get it from > the record? claim becomes no longer true. 
> Just to make it explicit, I'm assuming you are referring to a situation like this (and also assuming there is a name conflict with the outer scope): ``` a with { x = y; y = x; // not that same as `a.x` } ``` Though this can be a source of error for sure, but I think this can be a source of error both ways with roughly equal likelihood (so balances out), because in the above example, maybe you wanted to exchange `x` and `y` and this was a mistake. In fact, my opinion is that when you are not referring to the original value, then it would be a better practice to have the value in an intermediate variable anyway. Because most certainly nobody would question the intent of this: ``` a with { int commonValue = y; x = commonValue ; y = commonValue; } ``` That is, I would certainly waste more of my time to check if the intent of the `x = y; y = x;` block is what it does. Of course, as I hinted previously there can be examples where there is probably no such confusion, like with: ``` a with { x = 5; y = x; } ``` However, my point here is that I don't see that using `original.myProperty` is in general more error prone. Especially since this is only a question if there is a naming conflict. Also, there is one extra argument that I forgot to write down in my previous email. I would only expect the problem I originally wrote to happen for projects that are large evolving production systems (since it requires independent dependency upgrades). Now, those systems are probably not keen to use preview features (since it would be horrible for them to back out from the said preview feature). So, if my stated problem is indeed a problem that will not be experienced by people trying out this in preview. However, it is more likely that people will run into the shadowing issues (since that requires less unusual situations). And if the compilation error turns out to be a burden, then you can't just not notice the pain. However, in the other case, it is very unusual for someone to remember that "if this was a compile time error, I would have been very sad" (the actual pain is just a lot more memorable). So, I think making it a compile time error for preview brings the additional benefit of seeing if the compile time error actually hurts. Anyway, I think I wrote down all of my arguments now. So, I don't think I will have any more notes on the topic. I personally think that it is overall not terrible either way (definitely wouldn't be the worst unfixable mistake of Java, if shadowing indeed turned out to be a mistake), but I do think it would be better with the compile time error. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rotan.olexandr at gmail.com Wed Apr 24 20:55:50 2024 From: rotan.olexandr at gmail.com (=?UTF-8?B?0IbQny0yNCDQntC70LXQutGB0LDQvdC00YAg0KDQvtGC0LDQvdGM?=) Date: Wed, 24 Apr 2024 23:55:50 +0300 Subject: Extension methods In-Reply-To: References: Message-ID: Actually, that what I have thought about. This is a good way to find compromise, I thought about syntax like obj>ext(), which implies object is "injecting" into utility method. However, this introduces a new syntax, and as I reckon, Java is going for the most simple syntax possible, and this might still clash with it. Other thing that I have thought of is that some extensions (like previously mentioned LinQ), actually *should* look like an API itself, as they are extension provided by the same team that made an API. 
There could be some ways to achieve this like allowing dot syntax for extensions from same module and arrow from any outer, so there are something like trusted and untrusted extensions, but this adds new concepts to language, which is also udesirable. Although, I am glad to see that this idea is still not discarded completely. I agree that the place where extensions are the most needed is fluent apis like streams, that could contain many operations in chain, and nesting makes code barely readable. As I understand, readability is what Java going for, so with some adjustments, such feature could enhance readability in some cases while not damaging it in another. The syntax Anatoly proposed could be a point where there are virtually aren't any major cons, while preserving all the pros. On Wed, Apr 24, 2024, 23:32 ??-24 ????????? ?????? wrote: > Actually, that what I have thought about. This is a good way to find > compromise, I thought about syntax like obj>ext(), which implies object is > "injecting" into utility method. However, this introduces a new syntax, and > as I reckon, Java is going for the most simple syntax possible, and this > might still clash with it. > > Other thing that I have thought of is that some extensions (like > previously mentioned LinQ), actually *should* look like an API itself, as > they are extension provided by the same team that made an API. There could > be some ways to achieve this like allowing dot syntax for extensions from > same module and arrow from any outer, so there are something like trusted > and untrusted extensions, but this adds new concepts to language, which is > also udesirable. > > Although, I am glad to see that this idea is still not discarded > completely. I agree that the place where extensions are the most needed is > fluent apis like streams, that could contain many operations in chain, and > nesting makes code barely readable. As I understand, readability is what > Java going for, so with some adjustments, such feature could enhance > readability in some cases while not damaging it in another. The syntax > Anatoly proposed could be a point where there are virtually aren't any > major cons, while preserving all the pros. > > > On Wed, Apr 24, 2024, 23:23 Anatoly Kupriyanov wrote: > >> In my mind, the main motivation of extension methods is to rearrange the >> syntax tree, replacing nesting calls with chaining, prefix with postfix. >> I.e., replacing: >> f1(f2(f3(x, "p3"), "p2", "p22"), "p1") >> with >> x.f3("p3").f2("p2", "p22").f1("p1") >> It significantly untangles the code and makes it more readable. Good >> example of good usage for it, could be the java-streams api, to add user >> defined stream operations. >> >> as for "the illusion of belonging" it could be addressed by introducing >> some special operator instead of dot to highlight the difference, e.g. >> something like: >> x?f3("p3")?f2("p2", "p22")?f1("p1") >> >> On Wed, 24 Apr 2024 at 20:07, Brian Goetz wrote: >> >>> >>> 5. One of the answers from the first thread you provided ( >>> https://stackoverflow.com/a/29494337) states that omitting extension >>> methods is a "philosophical choice", as API developers should define >>> the API. I have to strongly disagree with that. Extension methods are NOT >>> part of the API, they are EXTENSION to it. It does not breach >>> encapsulation as it can't access any internal members of API classes. 
>>> >>> >>> You can convince yourself that you like it, but that doesn?t change the >>> fact that it is a deliberate attempt to blur the boundary of the API. And >>> again, you might think that is fine, but we do not. The members of String, >>> and therefore the methods you can invoke through a String receiver, should >>> be defined by the String class (and its supertypes.). Invoking a method on >>> a receiver: >>> >>> aString.encrypt() >>> >>> where String has no method called encrypt(), is muddying the user?s view >>> of what the API of String is. I get that you are willing to say ?the API >>> of String is the methods in String, plus any methods I care to create the >>> illusion of belonging to String?, but that looks like monkey-patching to >>> us. Worse, this ?method call? might mean one thing in one context, and >>> another thing in another context. >>> >>> Extensions have to be imported explicitly (not the containing class), so >>> they are explicitly mentioned in the imports list. Also, are utility >>> methods also breaching this rule then? The only real difference I see is >>> differences in notation, and extension methods are clearly much more >>> concise. >>> >>> >>> Concision is not the goal of programming. >>> >>> >>> >> >> -- >> WBR, Anatoly. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vemana.github at gmail.com Thu Apr 25 04:21:24 2024 From: vemana.github at gmail.com (Subrahmanyam V) Date: Thu, 25 Apr 2024 09:51:24 +0530 Subject: Extension methods Message-ID: >> On the other hand, an experience report from people doing different kinds of work with Java is more likely to contain some new information. I don't know if this is new information but in my practice, the most common need for extension methods is with protobuf generated code. For example, consider a protobuf that defines a Point type with x and y coordinates. The generated code will have getters/setters/builders for x and y. But sometimes, I wish to add derived properties, say Point.distanceFromOrigin() { return sqrt(x*x+y*y);} . Both Guava's AutoValue and Java Records enable such 'derived' properties but there are no good options for generated code to my knowledge. Extension methods can bridge that gap when changing generated code is not practical. That said, based on a little bit of experience with Groovy's metaclass based additions, I am quite skeptical of extension methods for the purposes of changing an API's shape. So, even if extension methods were available in Java, I'd likely restrict usage to just the case above. As a contrast to extension methods, one feature I'd use without restriction (if it existed) is memoizing derived properties in records (like Guava's AutoValue' Memoized annotation) particularly since memoization has important performance implications. This contrast clarifies where my personal "line" is - extension methods have global codebase-wide impact and are easy to misuse, but property memoization is local impact with a well-defined value proposition. So, yay to the latter but nay/meh to the former. -------------- next part -------------- An HTML attachment was scrubbed... 
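To make the two halves of that report concrete: the derived-property part is easy with records today, while memoization is the part that still needs a workaround, because a record cannot add hidden instance fields. A minimal sketch, with a made-up Point type rather than the protobuf-generated one:

```
// Derived property: recomputed on every call, no extra state needed.
record Point(int x, int y) {
    double distanceFromOrigin() {
        return Math.sqrt((double) x * x + (double) y * y);
    }
}

// One memoization workaround today: hoist the cached value into a component
// and fill it in from a secondary constructor. The price is that the cached
// value now shows up in equals/hashCode/toString and the canonical signature.
record CachedPoint(int x, int y, double distanceFromOrigin) {
    CachedPoint(int x, int y) {
        this(x, y, Math.sqrt((double) x * x + (double) y * y));
    }
}
```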
URL: From attila.kelemen85 at gmail.com Thu Apr 25 21:39:40 2024 From: attila.kelemen85 at gmail.com (Attila Kelemen) Date: Thu, 25 Apr 2024 23:39:40 +0200 Subject: JEP 468 - binary compatibility clarifications Message-ID: Hi, Reading the JEP I'm unsure about some behavior when the record is recompiled with an additional component, while the code containing the `with` block is not. Can this be clarified in the JEP? Or is this undefined for now? For better understanding, let me write down a specific example: MyRecord.java (v1) ``` public record MyRecord(int x, int y) { } ``` MyRecord.java (v2) ``` public record MyRecord(int x, int y, int z) { } ``` MyRecord.java (v3) ``` public record MyRecord(int x, int y, int z) { public MyRecord(int x, int y) { this(x, y, 0); } } ``` Let's suppose we have the following class which we compile against v1 of `MyRecord`, and then never recompile: ``` public class MyClass { public static MyRecord adjustX(MyRecord src) { return src with { x = 9; } } } ``` Now my question is: What is the output of the following code? ``` System.out.println(MyClass.adjustX(new MyRecord(1, 2, 3)).z()); ``` A, When running with v2 of `MyRecord` on the classpath B, When running with v3 of `MyRecord` on the classpath. Thanks, Attila -------------- next part -------------- An HTML attachment was scrubbed... URL: From rotan.olexandr at gmail.com Fri Apr 26 14:40:41 2024 From: rotan.olexandr at gmail.com (=?UTF-8?B?0IbQny0yNCDQntC70LXQutGB0LDQvdC00YAg0KDQvtGC0LDQvdGM?=) Date: Fri, 26 Apr 2024 17:40:41 +0300 Subject: Extension methods In-Reply-To: References: Message-ID: I have tried writing some code and utilizing extension methods as much as possible in a production-like application to explore pros and cons of them in depth. Here I would like to share my experience along with some ideas that I came up with based on it. One thing I have to admit, is that they turned out much less usable then it seemed to be for me. In fact, I found a major blind spot that could be covered by their common design in most languages: they are virtually incompatible with DI containers due to their static nature. Essentially what my project is doing is taking an equation and parses it into polynomial form and then solves it. I was trying to separate equation validation logic from actual parsing as both of them are really complex using JSR 303 Bean Validation API. Well, as you might expect, there was no way to make this work together, because static methods cant be validated. After some thinking I figured that besides bean validation, static methods are also incompatible with runtime-weaved AOP, which is widely used in industry (annotations like @Transactional, @Cachable, @Async etc. (names are listed for Spring framework, but I'm sure there are things like that in Jakarta EE and other frameworks too)). For example, if one tries to implement something like Active Record using extension methods, transaction or async support provided by the framework is just unavailable. That led me to some ideas about extensions by members of class (this was kind of inspired by how delegation in kotlin works). I haven't thought about syntax, but it could be something like "private StringUtils utils = ... extends String". This solves a few problems: 1. Extensions become overridable 2. Extensions specified explicitly, however, still doest completely fix issue with blurring edges between API and extension. 3. This instance-based extension enables integration with DI containers which is a base of modern frameworks. 
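The JEP itself does not appear to pin down the translation, which is exactly why the question matters. Purely as an illustration: if the `with` expression were compiled against v1 into ordinary accessor calls plus a call to the then-canonical constructor, roughly like the hand-written code below, then case A would fail at runtime with NoSuchMethodError (v2 has no (int, int) constructor), while case B would print 0, because v3's two-argument constructor defaults z to 0. A different translation strategy could give different answers; this is not what JEP 468 specifies, only a sketch of why the behaviour hinges on what the compiled code binds to.

```
// Hypothetical desugaring of `src with { x = 9; }` as compiled against
// MyRecord v1. NOT what JEP 468 specifies; it only illustrates why the
// runtime behaviour depends on which constructor and accessors get bound.
public class MyClass {
    public static MyRecord adjustX(MyRecord src) {
        int x = src.x();           // component values captured via accessors
        int y = src.y();
        x = 9;                     // the body of the with-block
        return new MyRecord(x, y); // bound to v1's (int, int) canonical constructor
    }
}
```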
However, there is also a cons coming with this approach like the fact that many utility classes aren't meant to be instantiated and their constructors might be not available. This is bypassable, but clearly not desired. I want to make things clear that I am NOT proposing this API because this is just some half-expromt conceptions and in a very raw state. However, I haven't heard this suggestion anywhere so I thought this idea is worth sharing with the community. ??, 24 ???. 2024??. ? 23:56, ??-24 ????????? ?????? < rotan.olexandr at gmail.com>: > Sorry for duplicate, Gmail for some reason decided to exclude amber-dev > from recipients initially > > On Wed, Apr 24, 2024, 23:55 ??-24 ????????? ?????? < > rotan.olexandr at gmail.com> wrote: > >> Actually, that what I have thought about. This is a good way to find >> compromise, I thought about syntax like obj>ext(), which implies object is >> "injecting" into utility method. However, this introduces a new syntax, and >> as I reckon, Java is going for the most simple syntax possible, and this >> might still clash with it. >> >> Other thing that I have thought of is that some extensions (like >> previously mentioned LinQ), actually *should* look like an API itself, as >> they are extension provided by the same team that made an API. There could >> be some ways to achieve this like allowing dot syntax for extensions from >> same module and arrow from any outer, so there are something like trusted >> and untrusted extensions, but this adds new concepts to language, which is >> also udesirable. >> >> Although, I am glad to see that this idea is still not discarded >> completely. I agree that the place where extensions are the most needed is >> fluent apis like streams, that could contain many operations in chain, and >> nesting makes code barely readable. As I understand, readability is what >> Java going for, so with some adjustments, such feature could enhance >> readability in some cases while not damaging it in another. The syntax >> Anatoly proposed could be a point where there are virtually aren't any >> major cons, while preserving all the pros. >> >> On Wed, Apr 24, 2024, 23:32 ??-24 ????????? ?????? < >> rotan.olexandr at gmail.com> wrote: >> >>> Actually, that what I have thought about. This is a good way to find >>> compromise, I thought about syntax like obj>ext(), which implies object is >>> "injecting" into utility method. However, this introduces a new syntax, and >>> as I reckon, Java is going for the most simple syntax possible, and this >>> might still clash with it. >>> >>> Other thing that I have thought of is that some extensions (like >>> previously mentioned LinQ), actually *should* look like an API itself, as >>> they are extension provided by the same team that made an API. There could >>> be some ways to achieve this like allowing dot syntax for extensions from >>> same module and arrow from any outer, so there are something like trusted >>> and untrusted extensions, but this adds new concepts to language, which is >>> also udesirable. >>> >>> Although, I am glad to see that this idea is still not discarded >>> completely. I agree that the place where extensions are the most needed is >>> fluent apis like streams, that could contain many operations in chain, and >>> nesting makes code barely readable. As I understand, readability is what >>> Java going for, so with some adjustments, such feature could enhance >>> readability in some cases while not damaging it in another. 
The syntax >>> Anatoly proposed could be a point where there are virtually aren't any >>> major cons, while preserving all the pros. >>> >>> >>> On Wed, Apr 24, 2024, 23:23 Anatoly Kupriyanov >>> wrote: >>> >>>> In my mind, the main motivation of extension methods is to rearrange >>>> the syntax tree, replacing nesting calls with chaining, prefix with postfix. >>>> I.e., replacing: >>>> f1(f2(f3(x, "p3"), "p2", "p22"), "p1") >>>> with >>>> x.f3("p3").f2("p2", "p22").f1("p1") >>>> It significantly untangles the code and makes it more readable. Good >>>> example of good usage for it, could be the java-streams api, to add user >>>> defined stream operations. >>>> >>>> as for "the illusion of belonging" it could be addressed by introducing >>>> some special operator instead of dot to highlight the difference, e.g. >>>> something like: >>>> x?f3("p3")?f2("p2", "p22")?f1("p1") >>>> >>>> On Wed, 24 Apr 2024 at 20:07, Brian Goetz >>>> wrote: >>>> >>>>> >>>>> 5. One of the answers from the first thread you provided ( >>>>> https://stackoverflow.com/a/29494337) states that omitting extension >>>>> methods is a "philosophical choice", as API developers should define >>>>> the API. I have to strongly disagree with that. Extension methods are NOT >>>>> part of the API, they are EXTENSION to it. It does not breach >>>>> encapsulation as it can't access any internal members of API classes. >>>>> >>>>> >>>>> You can convince yourself that you like it, but that doesn?t change >>>>> the fact that it is a deliberate attempt to blur the boundary of the API. >>>>> And again, you might think that is fine, but we do not. The members of >>>>> String, and therefore the methods you can invoke through a String receiver, >>>>> should be defined by the String class (and its supertypes.). Invoking a >>>>> method on a receiver: >>>>> >>>>> aString.encrypt() >>>>> >>>>> where String has no method called encrypt(), is muddying the user?s >>>>> view of what the API of String is. I get that you are willing to say ?the >>>>> API of String is the methods in String, plus any methods I care to create >>>>> the illusion of belonging to String?, but that looks like monkey-patching >>>>> to us. Worse, this ?method call? might mean one thing in one context, and >>>>> another thing in another context. >>>>> >>>>> Extensions have to be imported explicitly (not the containing class), >>>>> so they are explicitly mentioned in the imports list. Also, are utility >>>>> methods also breaching this rule then? The only real difference I see is >>>>> differences in notation, and extension methods are clearly much more >>>>> concise. >>>>> >>>>> >>>>> Concision is not the goal of programming. >>>>> >>>>> >>>>> >>>> >>>> -- >>>> WBR, Anatoly. >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rotanolexandr842 at gmail.com Fri Apr 26 19:00:02 2024 From: rotanolexandr842 at gmail.com (Olexandr Rotan) Date: Fri, 26 Apr 2024 22:00:02 +0300 Subject: Some thoughts on Member Patterns as parser developer Message-ID: I read through some messages about member patterns design and accumulated some thoughts to share. It happened so that all of my recent projects were linked to parsing some data:equations, java statements etc., so I have been on close terms with switch statements during tokenization and syntax tree construction. It is common practice in parser development to build either type hierarchies. 
For older solutions, it is common to just informally "imply" that one type of tokens is a subtype of another and use some field like "kind" as a discriminator. The thing in common in both approaches, is that one "state" of token or ex[ression could be not mutually exclusive to another. Consider following: Token | - KeywordToken | | - ExtendsToken | | - SuperToken | | .... | - LiteralToken | | - NumberToken | | - StringToken ... Also, we have some factory that receives String and returns token: class Tokens { Token parse(String s) { .. } static case pattern parseKeyword(String s) { .. } // hope I understood syntax correctly static case pattern parseLiteral(String s) { .. } static case pattern parseNumberLiteral(String s) { .. } } Now, somewhere in the processing pipeline, I decided to use member pattern matching: switch (str) { case Tokens.parseLiteral(String literalStr) -> ... case Tokens.parseKeyword(String kwdStr) -> ... case null -> ... } At this point, all possible states are already exhausted. However, If i got everything correctly, this won't compile, as the compiler still thinks parseNumberLiteral is not exhausted, while effectively it is. I think such situations could become pretty common. One solution I can think of is to add an option to specify a supercase like static case pattern parseNumberLiteral(String s) extends parseLiteral { .. }. This might seem weird, but, in fact, things like this have been here since the first day of type pattern matching. case String effectively extends case CharSequence and case Object, so I think enabling something like this for custom cases would be reasonable. Also, some cases could accept null as valid value. For example, modifying Tokens class like so: class Tokens { Token parse(String s) { .. } static case pattern parseKeyword(String s) { .. } // hope I understood syntax correctly static case pattern parseLiteral(String s) { .. } static case pattern parseNumberLiteral(String s) { .. } static case pattern nullOrEmpty(String s) { ... } } Now, if we match Tokens.nullOrEmpty(String s), null is also exhausted. Maybe there could be syntax like static case pattern nullOrEmpty(String s) covers null { ... } or something like this. Moreover, null is a special value, so, if we assume there is a nonNull(Object obj) pattern in Objects class, switch like this: switch (someVar) { case Objects.nonNull(Object notNull) -> ... case null -> ... } is also exhaustive for any type. That's my concern about this proposal. I really love this feature, I think it could introduce a completely new way of writing code in Java. However, to do so, I think something like this should be present to make pattern matching as smart and flexible as possible. Best regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From johannes.spangenberg at hotmail.de Sun Apr 28 10:43:59 2024 From: johannes.spangenberg at hotmail.de (Johannes Spangenberg) Date: Sun, 28 Apr 2024 12:43:59 +0200 Subject: Extension methods In-Reply-To: References: Message-ID: Here are a few notes from my side, although I am just a fellow subscriber to the mailing list and not an API designer of the Java project: > 2. Documentation accessibility is a strange point for me to be fair. > Every IDE nowadays is capable of fetching the right documentation, as > well as explicitly mentioning where the method comes from, as it is > done in C#, kotlin, swift and many other languages. I don't?think > anyone has ever heard complaints about poor documentation of LinQ. 
> Unless someone is writing in notepad, this is poorly applicable. I am regularly annoyed by this issue whenever I read source code at GitHub which uses Kotlin, C++, JavaScript, or on-demand imports in Java. I my view, it is less about finding the documentation, and more about finding the definition of the method. People are not always in their feature-rich IDE when reading source code. > Regarding ambiguity, the common practice (which also is applied in my > design) is to prioritize members over extensions, there is no > ambiguity if behaviour is well defined.This "potentially" could > sometimes result in source incompatibility?with some third-party > libraries, but as soon as this is not conflicting with anything from > stdlib, that's not really our concern. Also, I think it's kind of > exotic scenario, and may crash only with utilities from stdlib > classes, which cuts off virtually all production code from the risk group. Regarding the related risk of collisions, I recently was affected by the following Gradle bug, which is caused by a collision between the stdlib of Java and Kotlin (made possible by extension methods): https://github.com/gradle/gradle/issues/27699 > 3. Not overridable. Should they be? I don't?think there is a way to > achieve some kind of "polymorphic" extensions, and I don't?think there > should be: extension methods should provide polymorphic target > handling, floow LSP etc., not the other way around. Rust uses a single concept (Traits) which effectively supports both. Traits also provide a nice solution for the issues behind Unions. However, while I would like to see something similar to Traits in Java, I would actually cut the extension-method-semantic. (i.e. I would make the methods available only after the object was assigned to the type of the Trait.) For people who don't know Rust, you can think of Traits as interfaces, which can also declare implementations for already existing classes. Here is an attempt to map it to some made-up Java syntax: /* usage */ MyFilesTrait files1 = Path.of("src") MyFilesTrait files2 = List.of(Path.of("src-1"), Path.of("src-2")) /* trait definition */ interface MyFilesTrait { Stream asPathStream(); for-class Path { @Override Stream asPathStream() { return Stream.of(this); } } for-class File { @Override Stream asPathStream() { return Stream.of(this.toPath()); } } for-class Collection { @Override Stream asPathStream() { return this.stream().flatMap(item -> item.asPathStream()) } } } If you would seal the interface, you would effectively have an union. > Its common issue users just aren't?aware of the existence of certain > utility classes and end up with nonoptimal code. This seems mostly related to the auto-completion of the IDE. I don't think the extension functions would be necessary for that, but I agree that extension methods would guide IDEs and kind of force them to implement the discovery during auto-completion. > Code without extension methods is always much more verbose, if the API > is supposed to be fluent developer could end up with deep nested > invocations. > as for "the illusion of belonging" it could be addressed by > introducing some special operator instead of dot to highlight the > difference, e.g. something like: > x?f3("p3")?f2("p2", "p22")?f1("p1") In the JavaScript community, there were some discussions about introducing a "pipe operator" some years ago, but it seems to be stalled right now. https://github.com/tc39/proposal-pipeline-operator > JS also has its own unique way of extending APIs. 
> The only modern widely-used language that does not have extension
> methods is Python, but that only applies to built-ins; custom classes
> could be extended as well.

If you are referring to the possibility to assign new properties to prototypes (JS) or classes (Python), then I think it is considered bad practice in parts of both communities.

From my point of view, extension functions as implemented by Kotlin are not really a good fit for Java. Java and Kotlin have greatly different design philosophies. While Kotlin focuses more on convenience (or ease of use), Java has a much bigger focus on language simplicity. The complex method resolution required for this style of extension functions doesn't seem to fit well for Java in my opinion. Other solutions like the pipeline operator seem like a better fit to me. While they also increase the complexity by introducing a new operator, they keep the method resolution quite simple and don't hide the complexity as extension methods do. As a result, they also don't suffer from the accessibility issues of extension methods.

Best Regards,
Johannes
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rotanolexandr842 at gmail.com Sun Apr 28 11:43:56 2024
From: rotanolexandr842 at gmail.com (Olexandr Rotan)
Date: Sun, 28 Apr 2024 14:43:56 +0300
Subject: Extension methods
In-Reply-To: References: Message-ID: 

I would just like to express some of my concerns regarding the pipe operator ( |> syntax ). I am not opposing it, and it may be just my aesthetic concerns, but what made me like Java syntax so much is that Java code most of the time looks like word.anotherWord. I am not sure how to explain this correctly, but this always gives off the impression of a homogeneous code style: no ** operators from python, no object referencing-dereferencing etc. I am kind of afraid that a pipe operator in such an environment could feel alien to users.

From what I managed to find on the net, maybe the code style convention that Elixir provides could partially fix that: it recommends adding whitespace before and after the pipe, like so: other_function() |> new_function() |> baz() |> bar() |> foo(). This code is readable and, imo, will not catch the reader's eye so much when they are reading the code.

Regarding traits, I am not an expert in functional programming, but what catches my eye right away is that, as I understand, this approach is incompatible with the var keyword.

On Sun, Apr 28, 2024 at 2:06 PM Johannes Spangenberg < johannes.spangenberg at hotmail.de> wrote:

> Here are a few notes from my side, although I am just a fellow subscriber
> to the mailing list and not an API designer of the Java project:
>
> 2. Documentation accessibility is a strange point for me to be fair. Every
> IDE nowadays is capable of fetching the right documentation, as well as
> explicitly mentioning where the method comes from, as it is done in C#,
> kotlin, swift and many other languages. I don't think anyone has ever heard
> complaints about poor documentation of LinQ. Unless someone is writing in
> notepad, this is poorly applicable.
>
> I am regularly annoyed by this issue whenever I read source code at GitHub
> which uses Kotlin, C++, JavaScript, or on-demand imports in Java. In my
> view, it is less about finding the documentation, and more about finding
> the definition of the method. People are not always in their feature-rich
> IDE when reading source code.
> > Regarding ambiguity, the common practice (which also is applied in my > design) is to prioritize members over extensions, there is no ambiguity if > behaviour is well defined.This "potentially" could sometimes result in > source incompatibility with some third-party libraries, but as soon as this > is not conflicting with anything from stdlib, that's not really our > concern. Also, I think it's kind of exotic scenario, and may crash only > with utilities from stdlib classes, which cuts off virtually all production > code from the risk group. > > Regarding the related risk of collisions, I recently was affected by the > following Gradle bug, which is caused by a collision between the stdlib of > Java and Kotlin (made possible by extension methods): > > https://github.com/gradle/gradle/issues/27699 > > 3. Not overridable. Should they be? I don't think there is a way to > achieve some kind of "polymorphic" extensions, and I don't think there > should be: extension methods should provide polymorphic target handling, > floow LSP etc., not the other way around. > > Rust uses a single concept (Traits) which effectively supports both. > Traits also provide a nice solution for the issues behind Unions. However, > while I would like to see something similar to Traits in Java, I would > actually cut the extension-method-semantic. (i.e. I would make the methods > available only after the object was assigned to the type of the Trait.) > > For people who don't know Rust, you can think of Traits as interfaces, > which can also declare implementations for already existing classes. Here > is an attempt to map it to some made-up Java syntax: > > /* usage */ > MyFilesTrait files1 = Path.of("src") > MyFilesTrait files2 = List.of(Path.of("src-1"), Path.of("src-2")) > > /* trait definition */ > interface MyFilesTrait { > > Stream asPathStream(); > > for-class Path { > @Override > Stream asPathStream() { > return Stream.of(this); > } > } > > for-class File { > @Override > Stream asPathStream() { > return Stream.of(this.toPath()); > } > } > > for-class Collection { > @Override > Stream asPathStream() { > return this.stream().flatMap(item -> item.asPathStream()) > } > } > } > > If you would seal the interface, you would effectively have an union. > > Its common issue users just aren't aware of the existence of certain > utility classes and end up with nonoptimal code. > > This seems mostly related to the auto-completion of the IDE. I don't think > the extension functions would be necessary for that, but I agree that > extension methods would guide IDEs and kind of force them to implement the > discovery during auto-completion. > > Code without extension methods is always much more verbose, if the API is > supposed to be fluent developer could end up with deep nested invocations. > > as for "the illusion of belonging" it could be addressed by introducing > some special operator instead of dot to highlight the difference, e.g. > something like: > x?f3("p3")?f2("p2", "p22")?f1("p1") > > In the JavaScript community, there were some discussions about introducing > a "pipe operator" some years ago, but it seems to be stalled right now. > > https://github.com/tc39/proposal-pipeline-operator > > JS also has its own unique way of extending APIs. The only modern > widely-used language that does not have extension methods is Python, but > that only applies to built-ins, custom classes could be extended as well. 
> > If you are referring to the possibility to assign new properties to > prototypes (JS) or classes (Python), than I think it is considered bad > practice in parts of both communities. > > From my point of view, extension functions as implemented by Kotlin are > not really a good fit for Java. Java and Kotlin have greatly different > design philosophies. While Kotlin focuses more on convenience (or ease of > use), Java has a much bigger focus and language simplicity. The complex > method resolution required for this style of extension functions doesn't > seem to fit well for Java in my opinion. Other solutions like the pipeline > operator seem like a better fit to me. While they also increase the > complexity by introducing a new operator, it keeps the method resolution > quite simple and doesn't hide the complexity as done by extension methods. > As a result, it also doesn't suffer from the accessibility issues of > extension methods. > > Best Regards, > Johannes > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johannes.spangenberg at hotmail.de Sun Apr 28 12:25:11 2024 From: johannes.spangenberg at hotmail.de (Johannes Spangenberg) Date: Sun, 28 Apr 2024 14:25:11 +0200 Subject: Extension methods In-Reply-To: References: Message-ID: > I would just like to express some of my concerns regarding pipe > operator ( |> syntax ). I agree that the pipe operator proposed for JavaScript feels a bit alien. It also doesn't mix well with normal method calls. So, I also wouldn't take it as is. I just thought it might be interesting that there were related discussions in another community already, as they might get relevant if such solution would be considered for Java. > Regarding?traits, I am not an expert in functional programming, but > what catches my eye right away is that, as I understand, this approach > is incompatible with var keyword. While I was just assigning the objects to a variable, I think that is not a common use case. It was just the most simple use case I could think of, which demonstrates the behavior. The var-keyword would continue to resolve to the most specific type known, i.e. Path and List. (Just as the `var` in `var x = new ArrayList()` resolves into ArrayList, not List.) I think the most common use cases for Traits, just as for unions, would be in parameters. For example, consider the Gradle API of Project.files(Object...). https://docs.gradle.org/current/javadoc/org/gradle/api/Project.html#files-java.lang.Object...- Anyway, also note that Traits would probably deserve there own discussion, if actually considered. I just wanted to mention them because when implemented as Rust, they provide a solution for extension methods which can be considered superior to the solutions of Kotlin and some other languages. However, I am not actually a fan of extension methods in Java, as mentioned in my previous email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Sat Apr 27 11:41:33 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Sat, 27 Apr 2024 07:41:33 -0400 Subject: Extension methods In-Reply-To: References: Message-ID: <1e724c13-1be3-44a4-83d2-3b01d38a13be@oracle.com> On 4/26/2024 10:40 AM, ??-24 ????????? ?????? wrote: > > One thing I have to?admit, is that they turned out much less usable > then it seemed to be for me. 
> In fact, I found a major blind spot that could be covered by their
> common design in most languages: they are virtually incompatible with
> DI containers due to their static nature.

Indeed, the staticness of extension methods is part of what makes them problematic in many ways. And you discovered in your exploration that in order to "fix" this, you have to bring in the complexity of another, different kind of override / lookup mechanism (not unlike implicits in Scala.)

From brian.goetz at oracle.com Mon Apr 29 12:09:25 2024
From: brian.goetz at oracle.com (Brian Goetz)
Date: Mon, 29 Apr 2024 08:09:25 -0400
Subject: Some thoughts on Member Patterns as parser developer
In-Reply-To: References: Message-ID: <6bbee91d-6a56-4a4a-8bff-0fbe7a3b4c67@oracle.com>

On 4/26/2024 3:00 PM, Olexandr Rotan wrote:
> I read through some messages about member patterns design and
> accumulated some thoughts to share.
>
> It happened so that all of my recent projects were linked to parsing
> some data: equations, java statements etc., so I have been on close
> terms with switch statements during tokenization and syntax tree
> construction.
>
> It is common practice in parser development to build either type
> hierarchies. For older solutions, it is common to just informally
> "imply" that one type of tokens is a subtype of another and use some
> field like "kind" as a discriminator. The thing in common in both
> approaches is that one "state" of token or expression could be not
> mutually exclusive to another. Consider following:
>
> Token
> | - KeywordToken
> | | - ExtendsToken
> | | - SuperToken
> | | ....
> | - LiteralToken
> | | - NumberToken
> | | - StringToken
> ...
> Also, we have some factory that receives String and returns token:
> class Tokens {
>          Token parse(String s) { .. }
>          static case pattern parseKeyword(String s) { .. } // hope I
> understood syntax correctly
>          static case pattern parseLiteral(String s) { .. }
>          static case pattern parseNumberLiteral(String s) { .. }
> }

I am not sure what semantics you have in mind, but given the name (`parseXxx`), you might be on very dangerous territory here. If these parse methods are intended to actually mutate the internal state of the parser, then declaring them as patterns is going to be pretty wrong. Patterns are not merely "conditional methods with multiple return". So it's quite possible this whole question rests on a faulty base.

Your patterns are missing the declaration of what the match candidate is, so I will assume that these are patterns whose match candidate is Token. You also don't show whether Token actually has a hierarchy like you describe, or whether that is only on the whiteboard.

> Now, somewhere in the processing pipeline, I decided to use member
> pattern matching:
> switch (str) {
>           case Tokens.parseLiteral(String literalStr) -> ...
>           case Tokens.parseKeyword(String kwdStr) -> ...
>           case null -> ...
> }
>
> At this point, all possible states are already exhausted. However, if
> I got everything correctly, this won't compile, as the compiler still
> thinks parseNumberLiteral is not exhausted, while effectively it is.

Why is it effectively exhausted here? What hidden source of exhaustiveness information is there in your hierarchy? And, could it be made explicit?

There are many questions here, but my strong impression here is that you are using the wrong tool in a few places, and so not surprisingly are at a dead end.

One way to make this all explicit would be:
    sealed interface Token { ... }
    sealed interface KeywordToken extends Token { ... }
    sealed interface LiteralToken extends Token { ... }
    // records for each token type

Now, you would get record patterns for free (case LiteralToken(...)) and exhaustiveness for free also. And if you were switching only on a LiteralToken, you would only have to cover the subtypes of LiteralToken.

> I think such situations could become pretty common. One solution I can
> think of is to add an option to specify a supercase like static case
> pattern parseNumberLiteral(String s) extends parseLiteral { .. }.

I think we're pretty far from identifying the problem, so probably best to hold off suggesting solutions at this point. Let's back up and expose the assumptions here first.

From rotanolexandr842 at gmail.com Mon Apr 29 14:02:57 2024
From: rotanolexandr842 at gmail.com (Olexandr Rotan)
Date: Mon, 29 Apr 2024 17:02:57 +0300
Subject: Some thoughts on Member Patterns as parser developer
In-Reply-To: <6bbee91d-6a56-4a4a-8bff-0fbe7a3b4c67@oracle.com>
References: <6bbee91d-6a56-4a4a-8bff-0fbe7a3b4c67@oracle.com>
Message-ID: 

I think I did a really poor job expressing my thoughts in the first message. I will try to be more clear now, along with some situations I have encountered myself.

Assume we have a stateless math expressions parser. For simplicity, let's assume that we split expressions into several types: "computation-heavy", "computation-lightweight", "erroneous" (permits null as input) and "computation-remote" (delegates parsing to another service via some communication protocol), and types can be assigned heuristically (speculatively). For some tasks, we need to perform some pre- and postprocessing around the core parsing logic, like result caching, wrapping parsing into a virtual thread and registering it in a phaser etc.; for others - log a warning or error, or fail parsing with an exception.

These types could be considered "abstract types", because they are "bins" that real token types fall into, exclusively into one. Let's omit class hierarchies this time; we will be fine just under the assumption that a null value is considered an erroneous token, and a polynomial is considered computation-heavy.

Now, in the parser class, we define the following patterns:

static case pattern matchesHeavy(String s) { .. }
static case pattern matchesLight(String s) { .. }
static case pattern matchesErroneous(String s) { .. } // covers null and unresolved
static case pattern matchesRemote(String s) { .. }
static case pattern matchesPolynomial(String s) { .. } // considered heavy
// other patterns that are either subcases of the ones mentioned or aren't for String type...

Note: last time I was kind of misled by an example I found in the jdk mails with an Integer.parseInt pattern, when effectively what I meant was a matching pattern, not parsing.

Now, we decide to use switch pattern matching when building a syntax tree, like so:

switch (expression /* expression is of type String */) {
    case matchesHeavy(String heavy) -> ...
    case matchesLight(String light) -> ...
    case matchesRemote(String remote) -> ...
    case matchesErroneous(String erroneous) -> ... // covers null and any unresolved (default branch)
}

Now, what we see is that effectively, every possible pattern for the String type (I make an emphasis here and will return to this later) is covered, including null and default. But, as I understand, the following won't compile.
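For comparison, here is a minimal sketch of the alternative suggested earlier in the thread, with the classification folded into a sealed hierarchy so that the switch is exhaustive under current Java (21) semantics. The Classified/Heavy/Light/Remote/Erroneous names and the heuristics in classify() are purely illustrative assumptions, not part of the original mail or of any proposal:

```
// Sketch only: classify the raw String once, then switch over the sealed result.
sealed interface Classified permits Heavy, Light, Remote, Erroneous {}
record Heavy(String expr) implements Classified {}
record Light(String expr) implements Classified {}
record Remote(String expr) implements Classified {}
record Erroneous(String expr) implements Classified {}   // null / unresolved input lands here

class ExpressionClassifier {

    // Placeholder heuristics; a real parser would decide differently.
    static Classified classify(String expression) {
        if (expression == null || expression.isBlank()) return new Erroneous(expression);
        if (expression.startsWith("remote:"))           return new Remote(expression);
        if (expression.length() > 100)                  return new Heavy(expression);
        return new Light(expression);
    }

    static String handle(String expression) {
        // Exhaustive without a default branch: the compiler knows the four permitted subtypes.
        return switch (classify(expression)) {
            case Heavy(String e)     -> "heavy: " + e;
            case Light(String e)     -> "light: " + e;
            case Remote(String e)    -> "remote: " + e;
            case Erroneous(String e) -> "erroneous: " + e;
        };
    }
}
```

The trade-off is exactly the one being debated here: the classification step introduces a type whose main job is to carry the exhaustiveness information the compiler needs.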
As I see, there are two layers for this problem, one of which was not even mentioned by me in the previous letter, Layer 1 (not mentioned in first letter): patterns are exhausted ONLY for String type (and types assignable to it if there were such). If we are trying to match Object-typed value, patterns are not exhausted. You were really right to note that there weren't any real sources of exhaustiveness in my initial letter. This leads me to a thought that to declare a finite states list for some type, there will have to be introduced some kind of companion object that lists all possible states for some type, or at least I haven't come up with something better. This kind of state object would be a really complex concept to introduce (complex for users in the first place), however, this can also be a very powerful tool in the right hands. Layer 2: even if there are some "patterns object" or any workaround, it will contain matchesPolynomial pattern. This pattern is a more specific version of matchesHeavy pattern, and therefore, covered by it. However, if we leave it as it is now, the compiler will think this branch is not implemented, and, therefore, will demand the default branch in switch, even if it is effectively unreachable. This is what I have tried to propose fix for with this matchesPolynomial extends matchesHeavy syntax. Also, matchesErroneous covers null (that is what i referred to in "covers null" syntax section), and default is covered too (but I don't think matchesErroneous covers default should be a thing). Also, there could be patterns that cover any non-null value (like potential Objects.nonNull), and the example I have provided in the first letter demonstrates that Objects.nonNull and null patterns are also exhausting for any non-primitive object. I hope now my thoughts are more clear. Summarizing, what I want to tell by this thoughts is that there are potential to make member patterns much more powerful if there would be a way to a) assert that there are a finite amount of patterns for type and each value falls at least in one and b) declare that one pattern is subcase of another pattern. This is for sure would be complex for understanding of users, but could potentially offer a very powerful instrument in return. I'm sure that you could think of countless ways to apply this in various projects. Regards On Mon, Apr 29, 2024 at 3:09?PM Brian Goetz wrote: > > > On 4/26/2024 3:00 PM, Olexandr Rotan wrote: > > I read through some messages about member patterns design and > > accumulated some thoughts to share. > > > > It happened so that all of my recent projects were linked to parsing > > some data:equations, java statements etc., so I have been on close > > terms with switch statements during tokenization and syntax tree > > construction. > > > > It is common practice in parser development to build either type > > hierarchies. For older solutions, it is common to just informally > > "imply" that one type of tokens is a subtype of another and use some > > field like "kind" as a discriminator. The thing in common in both > > approaches, is that one "state" of token or ex[ression could be not > > mutually exclusive to another. Consider following: > > > > Token > > | - KeywordToken > > | | - ExtendsToken > > | | - SuperToken > > | | .... > > | - LiteralToken > > | | - NumberToken > > | | - StringToken > > ... > > Also, we have some factory that receives String and returns token: > > class Tokens { > > Token parse(String s) { .. 
} > > static case pattern parseKeyword(String s) { .. } // hope I > > understood syntax correctly > > static case pattern parseLiteral(String s) { .. } > > static case pattern parseNumberLiteral(String s) { .. } > > } > > I am not sure what semantics you have in mind, but given the name > (`parseXxx`), you might be on very dangerous territory here. If these > parse methods are intended to actually mutate the internal state of the > parser, then declaring them as patterns is going to be pretty wrong. > Patterns are not merely "conditional methods with multiple return". So > its quite possible this whole question rests on a faulty base. > > Your patterns are missing the declaration of what the match candidate > is, so I will assume that these are patterns whose match candidate is > Token. You also don't show whether Token actually has a hierarchy like > you describe, or whether that is only on the whiteboard. > > > Now, somewhere in the processing pipeline, I decided to use member > > pattern matching: > > switch (str) { > > case Tokens.parseLiteral(String literalStr) -> ... > > case Tokens.parseKeyword(String kwdStr) -> ... > > case null -> ... > > } > > > > At this point, all possible states are already exhausted. However, If > > i got everything correctly, this won't compile, as the compiler still > > thinks parseNumberLiteral is not exhausted, while effectively it is. > > Why is it effectively exhausted here? What hidden source of > exhaustiveness information is there in your hiearchy? And, could it be > made explicit? > > There are many questions here, but my strong impression here is that you > are using the wrong tool in a few places, and so not surprisingly are at > a dead end. > > One way to make this all explicit would be: > > sealed interface Token { ... } > sealed interface KeywordToken extends Token { ... } > sealed interface LiteralToken extends Token { ... } > // records for each token type > > Now, you would get record patterns for free (case LiteralToken(...)) and > exhaustiveness for free also. And if you were switching only on a > LiteralToken, you would only have to cover the subtypes of LIteralToken. > > > I think such situations could become pretty common. One solution I can > > think of is to add an option to specify a supercase like static case > > pattern parseNumberLiteral(String s) extends parseLiteral { .. }. > > I think we're pretty far from identifying the problem, so probably best > to hold off suggesting solutions at this point.Let's back up and expose > the assumptions here first. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gavin.bierman at oracle.com Mon Apr 29 14:50:32 2024 From: gavin.bierman at oracle.com (Gavin Bierman) Date: Mon, 29 Apr 2024 14:50:32 +0000 Subject: JEP 468 - binary compatibility clarifications In-Reply-To: References: Message-ID: Hi Atilla, If you are asking about the binary compatibility of record classes, you can read the JLS: 13.4.27 Evolution of Record Classes Adding, deleting, changing, or reordering record components in a record class may break compatibility with pre-existing binaries that are not recompiled; such a change is not recommended for widely distributed record classes. 
More precisely, adding, deleting, changing, or reordering record components may change the corresponding implicit declarations of component fields and accessor methods, as well as changing the signature and implementation of the canonical constructor and other supporting methods, with consequences specified in §13.4.8 and §13.4.12.

In all other respects, the binary compatibility rules for record classes are identical to those for normal classes.

Gavin

On 25 Apr 2024, at 22:39, Attila Kelemen wrote:

Hi,

Reading the JEP I'm unsure about some behavior when the record is recompiled with an additional component, while the code containing the `with` block is not. Can this be clarified in the JEP? Or is this undefined for now?

For better understanding, let me write down a specific example:

MyRecord.java (v1)
```
public record MyRecord(int x, int y) { }
```

MyRecord.java (v2)
```
public record MyRecord(int x, int y, int z) { }
```

MyRecord.java (v3)
```
public record MyRecord(int x, int y, int z) {
  public MyRecord(int x, int y) { this(x, y, 0); }
}
```

Let's suppose we have the following class which we compile against v1 of `MyRecord`, and then never recompile:

```
public class MyClass {
  public static MyRecord adjustX(MyRecord src) {
    return src with { x = 9; }
  }
}
```

Now my question is: What is the output of the following code?

```
System.out.println(MyClass.adjustX(new MyRecord(1, 2, 3)).z());
```

A, When running with v2 of `MyRecord` on the classpath
B, When running with v3 of `MyRecord` on the classpath.

Thanks,
Attila
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rotanolexandr842 at gmail.com Mon Apr 29 14:51:17 2024
From: rotanolexandr842 at gmail.com (Olexandr Rotan)
Date: Mon, 29 Apr 2024 17:51:17 +0300
Subject: Some thoughts on Member Patterns as parser developer
In-Reply-To: References: <6bbee91d-6a56-4a4a-8bff-0fbe7a3b4c67@oracle.com>
Message-ID: 

I also would like to add some clarification on what problem I am trying to point out. Many could think I am just fighting null and default branches, but this is just a part of the whole picture.

1. Null and default branches: yes, the most obvious. I think I have talked about this part enough not to repeat it once again. Here I just want to add that such redundant branches could be much more damaging for readability than it seems at first glance. Besides the fact that this dead code is just annoying (if I'm not mistaken, I'm not the first one here to say that), a person could spend hours debugging trying to figure out why a null value doesn't reach the null branch, not knowing that some pattern actually accepts null and makes the null branch unreachable. Not every code reader is familiar enough with the code internals to know that there is no way for the default branch to be reached because all possible patterns are listed, or that one pattern is a subcase of another.

2. Else-if branches: without a way to tell the compiler there is a finite number of patterns for a value, it may think that not all possible if branches lead to either a return or a throw statement. I think it could be common to want a "processed by pattern" value in each if branch; however, the body of an if statement with a pattern as its condition will be recognized as only one of two possible branches unless there is a way to assert that, after trying to match the value against all possible patterns, at least one of the statement bodies will be executed.

3.
JIT optimizations: I am not really familiar with bytecode-level implementation of switch with patterns, but creating a way to declare hierarchy of patterns or waying that one pattern accepts null would for sure help JIT optimize unreachable branches On Mon, Apr 29, 2024 at 5:02?PM Olexandr Rotan wrote: > I think I did a really poor job expressing my thoughts in the first > message. I will try to be more clear now, along with some situations I have > encountered myself. > > Assume we have a stateless math expressions parser. For simplicity, let's > assume that we split expressions into several types: "computation-heavy", > "computation-lightweight", "erroneous" (permits null as input) and > "computation-remote" (delegate parsing to another service via some > communication protocol), and types can be assigned heuristically > (speculatively). For some tasks, we need to perform some pre- and > postprocessing around core parsing logic, like result caching, wrapping > parsing into virtual thread and registering it in phaser etc., for others - > log warning or error, or fail parsing with exception. > > These types could be considered "abstract types", because they are "bins" > that real token types fall into, exclusively into one. Lets omit class > hierarchies this time, we will be good just under assumption that null > value is considered erroneous token, and polynomial considered > computation-heavy. > > Now, in parser class, we define following patterns: > > static case pattern matchesHeavy(String s) { .. } > static case pattern matchesLight(String s) { .. } > static case pattern matchesErroneous(String s) { .. } // covers null and > unresolved > static case pattern matchesRemote(String s) { .. } > static case pattern matchesPolynomial(String s) { .. } // considered heavy > // other patterns that are either subcases of mentioned or aren't for > String type... > > Note: last time I have been kind of misled by an example I have found in > jdk mails with Integer.parseInt pattern, when effectively what I meant was > matching pattern, not parsing. > > Now, we decide to use switch pattern matching when building a syntax tree > like so: > > switch (expression /*expression is of type String*/) { > case matchesHeavy(String heavy) -> ... > case matchesLight(String heavy) -> ... > case matchesRemote(String heavy) -> ... > case matchesErroneous(String heavy) -> ... // covers null and any > unresolved (default brach) > } > > Now, what we see is that effectively, every possible pattern for String > type (I make an emphasis here and will return to this later) is covered, > including null and default. But, as I understand, the following won't > compile. > > As I see, there are two layers for this problem, one of which was not even > mentioned by me in the previous letter, > > Layer 1 (not mentioned in first letter): patterns are exhausted ONLY for > String type (and types assignable to it if there were such). If we are > trying to match Object-typed value, patterns are not exhausted. You were > really right to note that there weren't any real sources of exhaustiveness > in my initial letter. This leads me to a thought that to declare a finite > states list for some type, there will have to be introduced some kind of > companion object that lists all possible states for some type, or at least > I haven't come up with something better. 
This kind of state object would be > a really complex concept to introduce (complex for users in the first > place), however, this can also be a very powerful tool in the right hands. > > Layer 2: even if there are some "patterns object" or any workaround, it > will contain matchesPolynomial pattern. This pattern is a more specific > version of matchesHeavy pattern, and therefore, covered by it. However, if > we leave it as it is now, the compiler will think this branch is not > implemented, and, therefore, will demand the default branch in switch, even > if it is effectively unreachable. This is what I have tried to propose fix > for with this matchesPolynomial extends matchesHeavy syntax. Also, > matchesErroneous covers null (that is what i referred to in "covers null" > syntax section), and default is covered too (but I don't think > matchesErroneous covers default should be a thing). Also, there could be > patterns that cover any non-null value (like potential Objects.nonNull), > and the example I have provided in the first letter demonstrates that > Objects.nonNull and null patterns are also exhausting for any non-primitive > object. > > I hope now my thoughts are more clear. Summarizing, what I want to tell by > this thoughts is that there are potential to make member patterns much more > powerful if there would be a way to a) assert that there are a finite > amount of patterns for type and each value falls at least in one and b) > declare that one pattern is subcase of another pattern. This is for sure > would be complex for understanding of users, but could potentially offer a > very powerful instrument in return. I'm sure that you could think of > countless ways to apply this in various projects. > > Regards > > > On Mon, Apr 29, 2024 at 3:09?PM Brian Goetz > wrote: > >> >> >> On 4/26/2024 3:00 PM, Olexandr Rotan wrote: >> > I read through some messages about member patterns design and >> > accumulated some thoughts to share. >> > >> > It happened so that all of my recent projects were linked to parsing >> > some data:equations, java statements etc., so I have been on close >> > terms with switch statements during tokenization and syntax tree >> > construction. >> > >> > It is common practice in parser development to build either type >> > hierarchies. For older solutions, it is common to just informally >> > "imply" that one type of tokens is a subtype of another and use some >> > field like "kind" as a discriminator. The thing in common in both >> > approaches, is that one "state" of token or ex[ression could be not >> > mutually exclusive to another. Consider following: >> > >> > Token >> > | - KeywordToken >> > | | - ExtendsToken >> > | | - SuperToken >> > | | .... >> > | - LiteralToken >> > | | - NumberToken >> > | | - StringToken >> > ... >> > Also, we have some factory that receives String and returns token: >> > class Tokens { >> > Token parse(String s) { .. } >> > static case pattern parseKeyword(String s) { .. } // hope I >> > understood syntax correctly >> > static case pattern parseLiteral(String s) { .. } >> > static case pattern parseNumberLiteral(String s) { .. } >> > } >> >> I am not sure what semantics you have in mind, but given the name >> (`parseXxx`), you might be on very dangerous territory here. If these >> parse methods are intended to actually mutate the internal state of the >> parser, then declaring them as patterns is going to be pretty wrong. >> Patterns are not merely "conditional methods with multiple return". 
So >> its quite possible this whole question rests on a faulty base. >> >> Your patterns are missing the declaration of what the match candidate >> is, so I will assume that these are patterns whose match candidate is >> Token. You also don't show whether Token actually has a hierarchy like >> you describe, or whether that is only on the whiteboard. >> >> > Now, somewhere in the processing pipeline, I decided to use member >> > pattern matching: >> > switch (str) { >> > case Tokens.parseLiteral(String literalStr) -> ... >> > case Tokens.parseKeyword(String kwdStr) -> ... >> > case null -> ... >> > } >> > >> > At this point, all possible states are already exhausted. However, If >> > i got everything correctly, this won't compile, as the compiler still >> > thinks parseNumberLiteral is not exhausted, while effectively it is. >> >> Why is it effectively exhausted here? What hidden source of >> exhaustiveness information is there in your hiearchy? And, could it be >> made explicit? >> >> There are many questions here, but my strong impression here is that you >> are using the wrong tool in a few places, and so not surprisingly are at >> a dead end. >> >> One way to make this all explicit would be: >> >> sealed interface Token { ... } >> sealed interface KeywordToken extends Token { ... } >> sealed interface LiteralToken extends Token { ... } >> // records for each token type >> >> Now, you would get record patterns for free (case LiteralToken(...)) and >> exhaustiveness for free also. And if you were switching only on a >> LiteralToken, you would only have to cover the subtypes of LIteralToken. >> >> > I think such situations could become pretty common. One solution I can >> > think of is to add an option to specify a supercase like static case >> > pattern parseNumberLiteral(String s) extends parseLiteral { .. }. >> >> I think we're pretty far from identifying the problem, so probably best >> to hold off suggesting solutions at this point.Let's back up and expose >> the assumptions here first. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Mon Apr 29 15:27:32 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Mon, 29 Apr 2024 11:27:32 -0400 Subject: Some thoughts on Member Patterns as parser developer In-Reply-To: References: <6bbee91d-6a56-4a4a-8bff-0fbe7a3b4c67@oracle.com> Message-ID: <08f2bf28-02b0-4b3d-95fe-0dd562abc849@oracle.com> On 4/29/2024 10:02 AM, Olexandr Rotan wrote: > I think I did a really poor job expressing my thoughts in the first > message. I will try to be more clear now, along with some situations I > have encountered myself. > > Assume we have a stateless math expressions parser. For simplicity, > let's assume that we split expressions into several types: > "computation-heavy", "computation-lightweight", "erroneous" (permits > null as input) and "computation-remote" (delegate parsing to another > service via some communication protocol), and types can be assigned > heuristically (speculatively). For some tasks, we need to perform some > pre- and postprocessing around?core parsing logic, like result > caching, wrapping parsing into virtual thread and registering it in > phaser etc., for others - log warning or error, or fail parsing with > exception. If you are envisioning side-effects, then I think you are already abusing patterns.? Patterns either match, or they don't, and if they do, they may produce bindings to describe witnesses to the match. 
Exceptions are not available to you as a normal means of "something went wrong"; in writing patterns (you should think of throwing an exception from a pattern declaration as being only a few percent less drastic than calling System.exit()). Patterns, as explained in the various writeups, are the dual (deconstruction) of certain methods (aggregations.)?? I don't see the duality here.? (If I had to guess, you're trying to bootstrap your way to parser combinators with patterns, but that's a different feature.)? So I'm still not sure that you're using patterns right, so I still want to focus on that before we start inventing new features. Case patterns are a form of ad-hoc exhaustiveness, to be used only when other sources of exhaustiveness (e.g., enums, sealed types, ADTs) fail.? It isn't clear to me yet that these other sources have failed. From brian.goetz at oracle.com Mon Apr 29 15:32:16 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Mon, 29 Apr 2024 11:32:16 -0400 Subject: JEP 468 - binary compatibility clarifications In-Reply-To: References: Message-ID: <533e722e-558f-4665-b289-98a2c77faaef@oracle.com> Gavin answered for the spec; I'll answer now for the implementation. The current implementation will try to do an `invokespecial` of the (int, int) constructor, which will succeed with v1 and v3, but not v2.? It will also "inline" the (int, int) record pattern (shredding it into an instanceof test, and calls to accessors), which, once compiled, will work against v1/v2/v3. When we get to reconstruction expressions for ordinary classes, the pattern part will work like the constructor part today, assuming that you declared an explicit deconstructor in v3. On 4/25/2024 5:39 PM, Attila Kelemen wrote: > Hi, > > Reading the JEP I'm unsure about some behavior when the record is > recompiled with an additional component, while the code containing the > `with` block is not. Can this be clarified in the JEP? Or is this > undefined for now? > > For better understanding, let me write down a specific example: > > MyRecord.java (v1) > ``` > public record MyRecord(int x, int y) { } > ``` > > MyRecord.java (v2) > ``` > public record MyRecord(int x, int y, int z) { } > ``` > > MyRecord.java (v3) > ``` > public record MyRecord(int x, int y, int z) { > ? public MyRecord(int x, int y) { this(x, y, 0); } > } > ``` > > Let's suppose we have the following class which we compile against v1 > of `MyRecord`, and then never recompile: > > ``` > public class MyClass { > ? public static MyRecord adjustX(MyRecord src) { > ? ? return src with { x = 9; } > ? } > } > ``` > > Now my question is: What is the output of the following code? > > ``` > System.out.println(MyClass.adjustX(new MyRecord(1, 2, 3)).z()); > ``` > > A, When running with v2 of `MyRecord` on the classpath > B, When running with v3 of `MyRecord` on the classpath. > > Thanks, > Attila > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eirbjo at gmail.com Tue Apr 30 06:13:50 2024 From: eirbjo at gmail.com (=?UTF-8?B?RWlyaWsgQmrDuHJzbsO4cw==?=) Date: Tue, 30 Apr 2024 08:13:50 +0200 Subject: JEP-468: Derived record creation in record methods with parameter Message-ID: Hi, JEP-468 briefly mentions that derived record expressions can also be used inside the record class: Derived record creation expressions can also be used inside record classes > to simplify the implementation of basic operations: record Complex(double re, double im) { > Complex conjugate() { return this with { im = -im; }; } > Complex realOnly() { return this with { im = 0; }; } > Complex imOnly() { return this with { re = 0; }; } > } I'm trying to understand what happens when a derived expression is used in a record method taking a parameter with the same name as one of the record components: record Complex(double re, double im) { Complex withReal(double re) { return this with { re = ?; }; } } Can "this" be used inside the transformation block, like the following? Complex withReal(double re) { return this with { this.re = re; }; } I know this particular example is a bit silly, but I think it may be useful to combine multiple transformations in a single record method, also when the method takes one or more parameters. People will be running into this. Perhaps this could be clarified in the JEP with an example or does it need clarification in the JLS? Or is it just me missing something, as usual? :-) Thanks, Eirik. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eirbjo at gmail.com Tue Apr 30 06:41:27 2024 From: eirbjo at gmail.com (=?UTF-8?B?RWlyaWsgQmrDuHJzbsO4cw==?=) Date: Tue, 30 Apr 2024 08:41:27 +0200 Subject: JEP-468: Derived record creation in record methods with parameter In-Reply-To: References: Message-ID: On Tue, Apr 30, 2024 at 8:13?AM Eirik Bj?rsn?s wrote: > Complex withReal(double re) { return this with { this.re = re; }; } > > Although perhaps it would be strange if "this" refers to the origin record in the origin expression, then switches to refer to the derived record in the transformation statement? In a plain old setter, we could use "this.re = re" as an escape hatch to avoid shadowing of parameters, but if "this doesn't work like that" inside a transformation block, perhaps our only way out is to rename parameters/locals to avoid shadowing? Eirik. -------------- next part -------------- An HTML attachment was scrubbed... URL: From attila.kelemen85 at gmail.com Tue Apr 30 09:19:19 2024 From: attila.kelemen85 at gmail.com (Attila Kelemen) Date: Tue, 30 Apr 2024 11:19:19 +0200 Subject: JEP 468 - binary compatibility clarifications In-Reply-To: <533e722e-558f-4665-b289-98a2c77faaef@oracle.com> References: <533e722e-558f-4665-b289-98a2c77faaef@oracle.com> Message-ID: Thanks for both the answers. So, to summarize: JEP 468 leaves this behavior undefined (given the JLS saying "may break"), but for now it will work as if I did things manually. Though the spec leaves open the possibility of an indy using implementation in the future. Brian Goetz ezt ?rta (id?pont: 2024. ?pr. 29., H, 17:32): > Gavin answered for the spec; I'll answer now for the implementation. > > The current implementation will try to do an `invokespecial` of the (int, > int) constructor, which will succeed with v1 and v3, but not v2. It will > also "inline" the (int, int) record pattern (shredding it into an > instanceof test, and calls to accessors), which, once compiled, will work > against v1/v2/v3. 
> > When we get to reconstruction expressions for ordinary classes, the > pattern part will work like the constructor part today, assuming that you > declared an explicit deconstructor in v3. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Tue Apr 30 12:43:10 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Tue, 30 Apr 2024 08:43:10 -0400 Subject: JEP-468: Derived record creation in record methods with parameter In-Reply-To: References: Message-ID: The meaning of names inside the block is the same as outside, with the addition of the synthetic component locals introduced by the `with` construct (which may shadow outer variables.)? So here `this` refers to the method on which `withReal` was invoked. Of course, your example fails because `this.re` is a final field and assigning to it is not allowed. On 4/30/2024 2:13 AM, Eirik Bj?rsn?s wrote: > Hi, > > JEP-468 briefly mentions that derived record expressions can also be > used inside the record class: > > Derived record creation expressions can also be used inside record > classes to simplify the implementation of basic operations: > > record Complex(double re, double im) { > ? ? Complex conjugate() { return this with { im = -im; }; } > ? ? Complex realOnly() ?{ return this with { im = 0; }; } > ? ? Complex imOnly() ? ?{ return this with { re = 0; }; } > } > > > I'm trying to understand what happens when a derived expression is > used in a record method taking a parameter with the same name as one > of the record components: > > record Complex(double re, double im) { > ? ? Complex withReal(double re) { return this with { re = ?; }; } > } > > Can "this" be used inside the transformation block, like the following? > > ? Complex withReal(double re) { return this with { this.re > = re; }; } > > > I know this particular example is a bit silly, but I think it may be > useful?to combine multiple transformations in a single record method, > also when the method takes one or more parameters. People will be > running into this. > > Perhaps this could be clarified in the JEP with an example or does it > need?clarification in the JLS? Or is it just me missing something, as > usual? :-) > > Thanks, > Eirik. -------------- next part -------------- An HTML attachment was scrubbed... URL:
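To make the name-resolution rule above concrete, here is a small sketch of the workaround that follows from it. Note that the `with` construct is the derived record creation syntax proposed by JEP 468 (a preview feature), so the with-expression below is illustrative and does not compile on released JDKs; the `newRe` parameter name is simply an assumption chosen to avoid the shadowing:

```
record Complex(double re, double im) {

    // Renaming the parameter sidesteps the shadowing entirely: inside the block,
    // `re` is the synthetic component local introduced by the with-expression.
    Complex withReal(double newRe) {
        return this with { re = newRe; };
    }

    // Equivalent that compiles today, for comparison:
    Complex withRealToday(double newRe) {
        return new Complex(newRe, im);
    }
}
```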