From duke at openjdk.org Mon Jan 1 20:49:02 2024 From: duke at openjdk.org (duke) Date: Mon, 1 Jan 2024 20:49:02 GMT Subject: [lworld] Withdrawn: 8304168: [lworld] CDS tests fail with --enable-preview patched value classes In-Reply-To: References: Message-ID: On Tue, 14 Mar 2023 18:50:11 GMT, Roger Riggs wrote: > When --enable-preview is true, the patching of some java.base classes as value classes disables CDS. > Subsequently the CDS tests fail. > > For a CDS, disable Valhalla when --enable-preview is set so java.base is not patched with value classes. This pull request has been closed without being integrated. ------------- PR: https://git.openjdk.org/valhalla/pull/832 From duke at openjdk.org Mon Jan 1 20:50:15 2024 From: duke at openjdk.org (duke) Date: Mon, 1 Jan 2024 20:50:15 GMT Subject: [lworld] Withdrawn: 8267763: [lworld][lw3] Change "non-tearable" nomenclature to "access atomic" In-Reply-To: References: Message-ID: On Wed, 26 May 2021 07:39:17 GMT, Aleksey Shipilev wrote: > Current Valhalla code has the experimental marker interface `java.lang.NonTearable`, which is actually about access atomicity. It makes weird claims about word tearing and out-of-thin air values. > > First, this is not word tearing. Word tearing, as defined by JLS 17.6 is: _"This problem is sometimes known as word tearing, and on processors that cannot easily update a single byte in isolation some other approach will be required"._ That is, word tearing is when we cannot update the _narrow_ member without doing a _wider_ access, thus necessarily affecting the adjacent members. In Valhalla case, what we are dealing with is access atomicity: we sometimes cannot access the _wide_ member without doing a set of _narrower_ accesses. This is why JLS 17.7 says "non-atomic treatment of double and longs", not "word-tearing of double and longs". > > Second, the docs for `j.l.NonTearable` mention "out-of-thin-air" (OOTA) values, which are not related here at all. 
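The distinction drawn above between access atomicity and out-of-thin-air values can be made concrete with a small sketch (the class and method names below are illustrative, not from the PR): a racy reader of a plain `long` may observe a hybrid of two real writes, which JLS 17.7 permits for non-`volatile` `long`/`double` fields, and which is not an OOTA value because both halves come from known writes.

```java
// Hypothetical illustration of a "broken hybrid": the reader sees the high
// half of one write combined with the low half of another. Both halves were
// produced by real, expected writes, so this is non-atomic access, not an
// out-of-thin-air value.
public class AccessAtomicityDemo {

    // Simulate a reader that observes the two 32-bit halves of different
    // plain (non-volatile) writes to the same long field.
    static long hybrid(long writeA, long writeB) {
        return (writeA & 0xFFFF_FFFF_0000_0000L) | (writeB & 0x0000_0000_FFFF_FFFFL);
    }

    public static void main(String[] args) {
        long a = 0xAAAA_AAAA_AAAA_AAAAL; // written by thread 1
        long b = 0x5555_5555_5555_5555L; // written by thread 2
        long torn = hybrid(a, b);
        // The hybrid equals neither write:
        System.out.println(torn != a && torn != b); // prints: true
    }
}
```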
OOTA are the beasts from the causality loops: those are values that were never written by normal execution of the program (i.e. speculative values). In Valhalla case, the writes that produce the broken hybrid are known and expected writes from the conflicting writers. > > This nomenclature percolates to Valhalla VM code, so some change is needed there as well. > > Additional testing: > - [x] `runtime/valhalla` tests This pull request has been closed without being integrated. ------------- PR: https://git.openjdk.org/valhalla/pull/428 From duke at openjdk.org Mon Jan 1 20:50:57 2024 From: duke at openjdk.org (duke) Date: Mon, 1 Jan 2024 20:50:57 GMT Subject: [lworld] Withdrawn: Make "PrimitiveParameterizedClass.default" a poly expression. In-Reply-To: References: Message-ID: On Fri, 19 Mar 2021 00:01:50 GMT, Jesper Steen Møller wrote: > Make .default a separate node type in the parser. > > ## Issue > [JDK-8211914](https://bugs.openjdk.java.net/browse/JDK-8211914): [lworld] Javac should support type inference for default value creation > > Note: The Linux x86 builds in GitHub actions seem to fail with something completely unrelated to these changes. This pull request has been closed without being integrated. ------------- PR: https://git.openjdk.org/valhalla/pull/369 From duke at openjdk.org Tue Jan 9 01:26:54 2024 From: duke at openjdk.org (duke) Date: Tue, 9 Jan 2024 01:26:54 GMT Subject: Withdrawn: 8318903: [lw5] null-restricted storage API points In-Reply-To: <_uv7au7DtSGHx81AXLsGo9K5I04CI-Ptv0VIY0gxJdw=.a9785da9-eb09-4309-aba1-fbf7b7fae167@github.com> References: <_uv7au7DtSGHx81AXLsGo9K5I04CI-Ptv0VIY0gxJdw=.a9785da9-eb09-4309-aba1-fbf7b7fae167@github.com> Message-ID: On Thu, 26 Oct 2023 15:24:19 GMT, Vicente Romero wrote: > internal null-restricted storage API points This pull request has been closed without being integrated.
------------- PR: https://git.openjdk.org/valhalla/pull/942 From vromero at openjdk.org Fri Jan 12 21:23:39 2024 From: vromero at openjdk.org (Vicente Romero) Date: Fri, 12 Jan 2024 21:23:39 GMT Subject: Integrated: Merge lworld Message-ID: Merge lworld into lw5 ------------- Commit messages: - Merge branch 'lworld' into lw5_merge_lworld - Merge lworld - Merge lworld - Merge lworld - 8318117: [lw5] create a switch for null-restricted types - 8316628: [lw5] remove vnew, aconst_init, and withfield - Merge lworld - 8316561: [lw5] class file attribute NullRestricted shouldn't be generated for arrays - 8316325: [lw5] sync javac with the current JVMS, particularly assertions on new class attributes - Merge lworld - ... and 27 more: https://git.openjdk.org/valhalla/compare/474f876f...ee747015 The merge commit only contains trivial merges, so no merge-specific webrevs have been generated. Changes: https://git.openjdk.org/valhalla/pull/966/files Stats: 8244 lines in 293 files changed: 4473 ins; 2614 del; 1157 mod Patch: https://git.openjdk.org/valhalla/pull/966.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/966/head:pull/966 PR: https://git.openjdk.org/valhalla/pull/966 From vromero at openjdk.org Fri Jan 12 21:23:39 2024 From: vromero at openjdk.org (Vicente Romero) Date: Fri, 12 Jan 2024 21:23:39 GMT Subject: Integrated: Merge lworld In-Reply-To: References: Message-ID: On Fri, 12 Jan 2024 21:17:38 GMT, Vicente Romero wrote: > Merge lworld into lw5 This pull request has now been integrated. 
Changeset: f3b3c788 Author: Vicente Romero URL: https://git.openjdk.org/valhalla/commit/f3b3c788b3208e9431bf504ea55d2c5e4c828b9d Stats: 5190 lines in 99 files changed: 1289 ins; 3025 del; 876 mod Merge lworld ------------- PR: https://git.openjdk.org/valhalla/pull/966 From davidalayachew at gmail.com Sat Jan 13 00:54:44 2024 From: davidalayachew at gmail.com (David Alayachew) Date: Fri, 12 Jan 2024 19:54:44 -0500 Subject: Question -- Are null-restricted arrays a possible future? Message-ID: Hello Valhalla Dev Team, I see the JEP Draft for Null-Restricted Value Class Types ( https://openjdk.org/jeps/8316779), and it looks beautiful. I am extremely excited at the possibilities. At the very bottom of the draft, there is a tiny snippet --- "More general support for nullness features will be explored in a future JEP." Any chance that we could be looking at arrays that cannot be null? And to be clear, I am not saying arrays that cannot contain null. This JEP Draft already confirms that that is not only a possibility, but might very well become reality (only if this draft goes live, and then the feature exits preview into GA). No, I mean that the array itself cannot be null. Any possibility? And apologies if the answer is obvious. I don't want to make any assumptions here, since I am still wrapping my mind around how Valhalla made all of this stuff possible. I figured it's better to just ask. Thank you for your time and help! David Alayachew -------------- next part -------------- An HTML attachment was scrubbed... URL: From liangchenblue at gmail.com Sat Jan 13 06:39:50 2024 From: liangchenblue at gmail.com (-) Date: Sat, 13 Jan 2024 00:39:50 -0600 Subject: Question -- Are null-restricted arrays a possible future? In-Reply-To: References: Message-ID: Hello Alayachew, Before we answer this question, let's first look at arrays, both in Java and in other more native languages like C. 
We all know all Java arrays are stored in heap and variably-sized, while in native languages like C, there are stack-based arrays, but they have a constant length at compile time and can be non-null. To make an array field itself to be non-null, it most likely will be handled as part of the more general-purpose non-null object pointer restriction; since in the inlined/flattened layout, the array field is going to be a pointer to the heap, like other non-value objects. Compared to regular objects, there are 2 ways to achieve a non-null array: 1. You can probably generate a value record with a fixed number of fields to serve as a fixed-size array. And this is effectively the same as constant-sized arrays offered by C for stack usages, and Valhalla will be able to handle it like regular value classes (though it might refuse to inline because the object is too big) 2. All Java arrays have a zero-length value for its own type; we can potentially designate, say, any new Class[0] as the non-null default value of a Class[]. But this will hurt the existing identity assumptions around zero-length arrays. Also make note that Java arrays are polymorphic, i.e. you can assign a new Class[0] to a field of type Object[] or ConstantDesc[], while value classes are not. Alternative 1 sacrifices this polymorphism, while alternative 2 might bring confusion over a new Class[0] and a new Object[0] in an Object[] field, and I'm not quite sure about the impact on their heap layout yet. On Fri, Jan 12, 2024 at 6:55 PM David Alayachew wrote: > Hello Valhalla Dev Team, > > I see the JEP Draft for Null-Restricted Value Class Types ( > https://openjdk.org/jeps/8316779), and it looks beautiful. I am extremely > excited at the possibilities. > > At the very bottom of the draft, there is a tiny snippet --- "More general > support for nullness features will be explored in a future JEP." > > Any chance that we could be looking at arrays that cannot be null?
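The first alternative above (a class with a fixed number of fields standing in for a fixed-size array) can be sketched with an ordinary record so that it compiles today; under Valhalla it would presumably be a value record, and all names here are illustrative only:

```java
// Illustrative sketch of alternative 1: a fixed-size "array" of 4 ints as a
// record with one field per element. Under Valhalla this would be a value
// record; a plain record is used here so the sketch compiles today.
public record FixedIntArray4(int e0, int e1, int e2, int e3) {

    // Indexed read, dispatching to the field that plays the role of slot i.
    public int get(int i) {
        return switch (i) {
            case 0 -> e0;
            case 1 -> e1;
            case 2 -> e2;
            case 3 -> e3;
            default -> throw new IndexOutOfBoundsException(i);
        };
    }

    // "Updates" produce a new instance, since the fields are final.
    public FixedIntArray4 with(int i, int v) {
        return new FixedIntArray4(i == 0 ? v : e0, i == 1 ? v : e1,
                                  i == 2 ? v : e2, i == 3 ? v : e3);
    }
}
```

Unlike a real array, such a type is neither variably sized nor polymorphic, which is exactly the trade-off described above.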
> > And to be clear, I am not saying arrays that cannot contain null. This JEP > Draft already confirms that that is not only a possibility, but might very > well become reality (only if this draft goes live, and then the feature > exits preview into GA). > > No, I mean that the array itself cannot be null. Any possibility? And > apologies if the answer is obvious. I don't want to make any assumptions > here, since I am still wrapping my mind around how Valhalla made all of > this stuff possible. I figured it's better to just ask. > > Thank you for your time and help! > David Alayachew > -------------- next part -------------- An HTML attachment was scrubbed... URL: From redio.development at gmail.com Sat Jan 13 14:39:38 2024 From: redio.development at gmail.com (Red IO) Date: Sat, 13 Jan 2024 15:39:38 +0100 Subject: Question -- Are null-restricted arrays a possible future? In-Reply-To: References: Message-ID: Just assume we have an array of non null strings that itself can not be null. 1. We would need a syntax for non-null value arrays. Something like: String! [!] array = new String! [!] { "Foo", "Bar"} ; 2. Whether or not the compiler decides to flatten it is not directly interesting to the user. The non null guarantee is the interesting part. Optimizations the compiler can do are always just a bonus. My personal opinion: Excluding arrays from the ability to be null restricted would be a weird exception as they are objects just like every other class. The only problem is that they don't have a class declaration but are implicitly generated. Great regards RedIODev On Sat, Jan 13, 2024, 09:24 - wrote: > Hello Alayachew, > Before we answer this question, let's first look at arrays, both in Java > and in other more native languages like C.
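Pending syntax like the hypothetical `String! [!]` above, the non-null guarantee can only be emulated at the library level; a minimal sketch in current Java (the `NonNullArray` name and API are invented for illustration):

```java
import java.util.Objects;

// Illustrative emulation of a non-null array of non-null elements: the
// wrapper rejects both a null array reference and null elements at
// construction, capturing the guarantee (though none of the flattening).
public final class NonNullArray<T> {
    private final T[] elements;

    @SafeVarargs
    public NonNullArray(T... elements) {
        Objects.requireNonNull(elements, "array itself must not be null");
        for (T e : elements) {
            Objects.requireNonNull(e, "elements must not be null");
        }
        this.elements = elements.clone(); // defensive copy
    }

    public T get(int i) { return elements[i]; }

    public int length() { return elements.length; }
}
```

A caller would write `new NonNullArray<>("Foo", "Bar")`; a null array argument or a null element fails fast with `NullPointerException`.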
> To make an array field itself to be non-null, it most likely will be > handled as part of the more general-purpose non-null object pointer > restriction; since in the inlined/flattened layout, the array field is > going to be a pointer to the heap, like other non-value objects. > > Compared to regular objects, there are 2 ways to achieve a non-null array: > 1. You can probably generate a value record with a fixed number of fields > to serve as a fixed-size array. And this is effectively the same as > constant-sized arrays offered by C for stack usages, and Valhalla will be > able to handle it like regular value classes (though it might refuse to > inline because the object is too big) > 2. All Java arrays have a zero-length value for its own type; we can > potentially designate, say, any new Class[0] as the non-null default value > of a Class[]. But this will hurt the existing identity assumptions around > zero-length arrays. > > Also make note that Java arrays are polymorphic, i.e. you can assign a new > Class[0] to a field of type Object[] or ConstantDesc[], while value classes > are not. Alternative 1 sacrifices this polymorphism, while alternative 2 > might bring confusion over a new Class[0] and a new Object[0] in an > Object[] field, and I'm not quite sure about the impact on their heap > layout yet. > > On Fri, Jan 12, 2024 at 6:55?PM David Alayachew > wrote: > >> Hello Valhalla Dev Team, >> >> I see the JEP Draft for Null-Restricted Value Class Types ( >> https://openjdk.org/jeps/8316779), and it looks beautiful. I am >> extremely excited at the possibilities. >> >> At the very bottom of the draft, there is a tiny snippet --- "More >> general support for nullness features will be explored in a future JEP." >> >> Any chance that we could be looking at arrays that cannot be null? >> >> And to be clear, I am not saying arrays that cannot contain null. 
This >> JEP Draft already confirms that that is not only a possibility, but might >> very well become reality (only if this draft goes live, and then the >> feature exits preview into GA). >> >> No, I mean that the array itself cannot be null. Any possibility? And >> apologies if the answer is obvious. I don't want to make any assumptions >> here, since I am still wrapping my mind around how Valhalla made all of >> this stuff possible. I figured it's better to just ask. >> >> Thank you for your time and help! >> David Alayachew >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Sun Jan 14 14:15:48 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Sun, 14 Jan 2024 14:15:48 +0000 Subject: Question -- Are null-restricted arrays a possible future? In-Reply-To: References: Message-ID: <9D5FC2F2-E00E-4810-8B47-F0FC8EB4EE80@oracle.com> That would fall into the category of ?more general support for nullness features.? Arrays are reference types. This JEP starts deliberately narrowly ? with value types. On Jan 12, 2024, at 7:54 PM, David Alayachew > wrote: Hello Valhalla Dev Team, I see the JEP Draft for Null-Restricted Value Class Types (https://openjdk.org/jeps/8316779), and it looks beautiful. I am extremely excited at the possibilities. At the very bottom of the draft, there is a tiny snippet --- "More general support for nullness features will be explored in a future JEP." Any chance that we could be looking at arrays that cannot be null? And to be clear, I am not saying arrays that cannot contain null. This JEP Draft already confirms that that is not only a possibility, but might very well become reality (only if this draft goes live, and then the feature exits preview into GA). No, I mean that the array itself cannot be null. Any possibility? And apologies if the answer is obvious. 
I don't want to make any assumptions here, since I am still wrapping my mind around how Valhalla made all of this stuff possible. I figured it's better to just ask. Thank you for your time and help! David Alayachew -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidalayachew at gmail.com Sun Jan 14 16:19:02 2024 From: davidalayachew at gmail.com (David Alayachew) Date: Sun, 14 Jan 2024 11:19:02 -0500 Subject: Question -- Are null-restricted arrays a possible future? In-Reply-To: References: Message-ID: Thank you all for your responses! Glad to hear that it is a possibility. Just wanted to confirm. On Fri, Jan 12, 2024 at 7:54 PM David Alayachew wrote: > Hello Valhalla Dev Team, > > I see the JEP Draft for Null-Restricted Value Class Types ( > https://openjdk.org/jeps/8316779), and it looks beautiful. I am extremely > excited at the possibilities. > > At the very bottom of the draft, there is a tiny snippet --- "More general > support for nullness features will be explored in a future JEP." > > Any chance that we could be looking at arrays that cannot be null? > > And to be clear, I am not saying arrays that cannot contain null. This JEP > Draft already confirms that that is not only a possibility, but might very > well become reality (only if this draft goes live, and then the feature > exits preview into GA). > > No, I mean that the array itself cannot be null. Any possibility? And > apologies if the answer is obvious. I don't want to make any assumptions > here, since I am still wrapping my mind around how Valhalla made all of > this stuff possible. I figured it's better to just ask. > > Thank you for your time and help! > David Alayachew > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dsimms at openjdk.org Tue Jan 16 09:26:29 2024 From: dsimms at openjdk.org (David Simms) Date: Tue, 16 Jan 2024 09:26:29 GMT Subject: [lworld] RFR: Merge jdk Message-ID: Merge jdk-22+16 ------------- Commit messages: - Preload Attribute in classfile API - Merge tag 'jdk-22+16' into lworld_merge_jdk_22_16 - 8316627: JViewport Test headless failure - 8316156: ByteArrayInputStream.transferTo causes MaxDirectMemorySize overflow - 8316532: Native library copying in BuildMicrobenchmark.gmk cause dups on macOS - 8315869: UseHeavyMonitors not used - 8316562: serviceability/sa/jmap-hprof/JMapHProfLargeHeapTest.java times out after JDK-8314829 - 8296246: Update Unicode Data Files to Version 15.1.0 - 8316149: Open source several Swing JTree JViewport KeyboardManager tests - 8315880: change LockingMode default from LM_LEGACY to LM_LIGHTWEIGHT - ... and 103 more: https://git.openjdk.org/valhalla/compare/474f876f...9314b3ea The webrevs contain the adjustments done while merging with regards to each parent branch: - lworld: https://webrevs.openjdk.org/?repo=valhalla&pr=967&range=00.0 - jdk: https://webrevs.openjdk.org/?repo=valhalla&pr=967&range=00.1 Changes: https://git.openjdk.org/valhalla/pull/967/files Stats: 59998 lines in 1093 files changed: 16989 ins; 11626 del; 31383 mod Patch: https://git.openjdk.org/valhalla/pull/967.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/967/head:pull/967 PR: https://git.openjdk.org/valhalla/pull/967 From dsimms at openjdk.org Tue Jan 16 10:22:36 2024 From: dsimms at openjdk.org (David Simms) Date: Tue, 16 Jan 2024 10:22:36 GMT Subject: [lworld] RFR: Merge jdk [v2] In-Reply-To: References: Message-ID: > Merge jdk-22+16 David Simms has updated the pull request incrementally with one additional commit since the last revision: Problem listed deferred test failures ------------- Changes: - all: https://git.openjdk.org/valhalla/pull/967/files - new: https://git.openjdk.org/valhalla/pull/967/files/9314b3ea..e951196f 
Webrevs: - full: https://webrevs.openjdk.org/?repo=valhalla&pr=967&range=01 - incr: https://webrevs.openjdk.org/?repo=valhalla&pr=967&range=00-01 Stats: 14 lines in 2 files changed: 14 ins; 0 del; 0 mod Patch: https://git.openjdk.org/valhalla/pull/967.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/967/head:pull/967 PR: https://git.openjdk.org/valhalla/pull/967 From dsimms at openjdk.org Tue Jan 16 10:33:26 2024 From: dsimms at openjdk.org (David Simms) Date: Tue, 16 Jan 2024 10:33:26 GMT Subject: [lworld] Integrated: Merge jdk In-Reply-To: References: Message-ID: On Tue, 16 Jan 2024 09:17:37 GMT, David Simms wrote: > Merge jdk-22+16 This pull request has now been integrated. Changeset: fb7ea2db Author: David Simms URL: https://git.openjdk.org/valhalla/commit/fb7ea2dbf5a536235cd1c72b316a3e8e3b2fcc89 Stats: 60012 lines in 1094 files changed: 17003 ins; 11626 del; 31383 mod Merge jdk Merge jdk-22+16 ------------- PR: https://git.openjdk.org/valhalla/pull/967 From dsimms at openjdk.org Wed Jan 17 09:42:00 2024 From: dsimms at openjdk.org (David Simms) Date: Wed, 17 Jan 2024 09:42:00 GMT Subject: [lworld] RFR: Merge jdk Message-ID: Merge jdk-22+17 ------------- Commit messages: - Post merge compilation error - Merge jdk - 8316661: CompilerThread leaks CodeBlob memory when dynamically stopping compiler thread in non-product - 8315721: CloseRace.java#id0 fails transiently on libgraal - 8315966: Relativize initial_sp in interpreter frames - 8316924: java/lang/Thread/virtual/stress/ParkALot.java times out - 8316710: Exclude java/awt/font/Rotate/RotatedTextTest.java - 8299915: Remove ArrayAllocatorMallocLimit and associated code - 8316417: ObjectMonitorIterator does not return the most recent monitor and is incorrect if no monitors exists - 8293176: SSLEngine handshaker does not send an alert after a bad parameters - ... 
and 89 more: https://git.openjdk.org/valhalla/compare/fb7ea2db...d9554703 The webrevs contain the adjustments done while merging with regards to each parent branch: - lworld: https://webrevs.openjdk.org/?repo=valhalla&pr=968&range=00.0 - jdk: https://webrevs.openjdk.org/?repo=valhalla&pr=968&range=00.1 Changes: https://git.openjdk.org/valhalla/pull/968/files Stats: 10943 lines in 411 files changed: 8540 ins; 993 del; 1410 mod Patch: https://git.openjdk.org/valhalla/pull/968.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/968/head:pull/968 PR: https://git.openjdk.org/valhalla/pull/968 From dsimms at openjdk.org Wed Jan 17 09:49:16 2024 From: dsimms at openjdk.org (David Simms) Date: Wed, 17 Jan 2024 09:49:16 GMT Subject: [lworld] Integrated: Merge jdk In-Reply-To: References: Message-ID: <3STo2DxxFYqv5v9wwVawdvTDS6zh0vRAzYuywzykff0=.464475e7-7c82-4863-baba-a6c4fef581d1@github.com> On Wed, 17 Jan 2024 09:24:08 GMT, David Simms wrote: > Merge jdk-22+17 This pull request has now been integrated. 
Changeset: df2362fc Author: David Simms URL: https://git.openjdk.org/valhalla/commit/df2362fcbac0ed0bcca0083d325fc6054b02c2fd Stats: 10943 lines in 411 files changed: 8540 ins; 993 del; 1410 mod Merge jdk Merge jdk-22+17 ------------- PR: https://git.openjdk.org/valhalla/pull/968 From dsimms at openjdk.org Wed Jan 17 16:56:40 2024 From: dsimms at openjdk.org (David Simms) Date: Wed, 17 Jan 2024 16:56:40 GMT Subject: [lworld] RFR: Merge jdk Message-ID: Merge jdk-22+18, jdk-22+19 and jdk-22+20 ------------- Commit messages: - Merge tag 'jdk-22+20' into lworld_merge_jdk_22_18 - 8318363: Foreign benchmarks fail to build on some platforms - 8318183: C2: VM may crash after hitting node limit - 8315974: Make fields final in 'com.sun.crypto.provider' package - 8317886: Add @sealedGraph to ByteBuffer - 8318365: Test runtime/cds/appcds/sharedStrings/InternSharedString.java fails after JDK-8311538 - 8309966: Enhanced TLS connections - 8286503: Enhance security classes - 8297856: Improve handling of Bidi characters - 8296581: Better system proxy support - ... and 254 more: https://git.openjdk.org/valhalla/compare/df2362fc...11f92478 The webrevs contain the adjustments done while merging with regards to each parent branch: - lworld: https://webrevs.openjdk.org/?repo=valhalla&pr=969&range=00.0 - jdk: https://webrevs.openjdk.org/?repo=valhalla&pr=969&range=00.1 Changes: https://git.openjdk.org/valhalla/pull/969/files Stats: 34775 lines in 1165 files changed: 20586 ins; 6900 del; 7289 mod Patch: https://git.openjdk.org/valhalla/pull/969.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/969/head:pull/969 PR: https://git.openjdk.org/valhalla/pull/969 From jbhateja at openjdk.org Wed Jan 17 19:44:52 2024 From: jbhateja at openjdk.org (Jatin Bhateja) Date: Wed, 17 Jan 2024 19:44:52 GMT Subject: [lworld+vector] RFR: Merge lworld Message-ID: Merge latest lworld changes into lworld+vector branch. 
All VectorAPI and Valhalla tests are passing at various AVX levels with and w/o -XX:+DeoptimizeALot. Best Regards, Jatin ------------- Commit messages: - jcheck failure fix - Removing a white space - Merge branch 'lworld' of http://github.com/openjdk/valhalla into merge_lworld_branch - Merge jdk - 8316661: CompilerThread leaks CodeBlob memory when dynamically stopping compiler thread in non-product - 8315721: CloseRace.java#id0 fails transiently on libgraal - 8315966: Relativize initial_sp in interpreter frames - 8316924: java/lang/Thread/virtual/stress/ParkALot.java times out - 8316710: Exclude java/awt/font/Rotate/RotatedTextTest.java - 8299915: Remove ArrayAllocatorMallocLimit and associated code - ... and 780 more: https://git.openjdk.org/valhalla/compare/6d74481c...d6e58623 The webrevs contain the adjustments done while merging with regards to each parent branch: - lworld+vector: https://webrevs.openjdk.org/?repo=valhalla&pr=970&range=00.0 - lworld: https://webrevs.openjdk.org/?repo=valhalla&pr=970&range=00.1 Changes: https://git.openjdk.org/valhalla/pull/970/files Stats: 168937 lines in 4566 files changed: 79424 ins; 38494 del; 51019 mod Patch: https://git.openjdk.org/valhalla/pull/970.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/970/head:pull/970 PR: https://git.openjdk.org/valhalla/pull/970 From vromero at openjdk.org Wed Jan 17 20:16:26 2024 From: vromero at openjdk.org (Vicente Romero) Date: Wed, 17 Jan 2024 20:16:26 GMT Subject: Integrated: Merge lworld In-Reply-To: References: Message-ID: On Wed, 17 Jan 2024 20:06:48 GMT, Vicente Romero wrote: > Merge branch 'lworld' into lw5_merge_lworld > # Conflicts: > # src/jdk.jdeps/share/classes/com/sun/tools/javap/AttributeWriter.java This pull request has now been integrated. 
Changeset: c3423535 Author: Vicente Romero URL: https://git.openjdk.org/valhalla/commit/c3423535dea4b17f17e0f83751a3693ee9856514 Stats: 70935 lines in 1462 files changed: 25521 ins; 12615 del; 32799 mod Merge lworld ------------- PR: https://git.openjdk.org/valhalla/pull/971 From vromero at openjdk.org Wed Jan 17 20:16:25 2024 From: vromero at openjdk.org (Vicente Romero) Date: Wed, 17 Jan 2024 20:16:25 GMT Subject: Integrated: Merge lworld Message-ID: Merge branch 'lworld' into lw5_merge_lworld # Conflicts: # src/jdk.jdeps/share/classes/com/sun/tools/javap/AttributeWriter.java ------------- Commit messages: - Merge branch 'lworld' into lw5_merge_lworld - Merge lworld - Merge lworld - Merge lworld - Merge lworld - 8318117: [lw5] create a switch for null-restricted types - 8316628: [lw5] remove vnew, aconst_init, and withfield - Merge lworld - 8316561: [lw5] class file attribute NullRestricted shouldn't be generated for arrays - 8316325: [lw5] sync javac with the current JVMS, particularly assertions on new class attributes - ... and 28 more: https://git.openjdk.org/valhalla/compare/df2362fc...c091d270 The merge commit only contains trivial merges, so no merge-specific webrevs have been generated. Changes: https://git.openjdk.org/valhalla/pull/971/files Stats: 8226 lines in 292 files changed: 4455 ins; 2614 del; 1157 mod Patch: https://git.openjdk.org/valhalla/pull/971.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/971/head:pull/971 PR: https://git.openjdk.org/valhalla/pull/971 From jbhateja at openjdk.org Thu Jan 18 03:08:10 2024 From: jbhateja at openjdk.org (Jatin Bhateja) Date: Thu, 18 Jan 2024 03:08:10 GMT Subject: [lworld+vector] RFR: Merge lworld [v2] In-Reply-To: References: Message-ID: > Merge latest lworld changes into lworld+vector branch. > > All VectorAPI and Valhalla tests are passing at various AVX levels with and w/o -XX:+DeoptimizeALot. 
> > Best Regards, > Jatin Jatin Bhateja has updated the pull request with a new target base due to a merge or a rebase. The pull request now contains 20 commits: - jcheck failure fix - Removing a white space - Merge branch 'lworld' of http://github.com/openjdk/valhalla into merge_lworld_branch - 8319945: [lworld+vector] Fix vector api jtreg crash with "-XX:-EnableVectorSupport" Reviewed-by: jbhateja - 8319972: [lworld+vector] Enable intrinsification of Unsafe.finishPrivateBuffer. Reviewed-by: xgong - 8319971: [lworld+vector] Fallback implementation cleanups. Reviewed-by: xgong - 8311675: [lworld+vector] Max Species support. Reviewed-by: xgong - 8317699: [lworld+vector] Fix Vector API tests crash with "assert(vbox->is_Phi()) failed: should be phi" Reviewed-by: jbhateja - Merge lworld - 8314980: [lworld+vector] consider scalarization conditions during ciMultiField creation. Reviewed-by: xgong - ... and 10 more: https://git.openjdk.org/valhalla/compare/df2362fc...d6e58623 ------------- Changes: https://git.openjdk.org/valhalla/pull/970/files Webrev: https://webrevs.openjdk.org/?repo=valhalla&pr=970&range=01 Stats: 11700 lines in 149 files changed: 4251 ins; 2688 del; 4761 mod Patch: https://git.openjdk.org/valhalla/pull/970.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/970/head:pull/970 PR: https://git.openjdk.org/valhalla/pull/970 From jbhateja at openjdk.org Thu Jan 18 03:08:12 2024 From: jbhateja at openjdk.org (Jatin Bhateja) Date: Thu, 18 Jan 2024 03:08:12 GMT Subject: [lworld+vector] Integrated: Merge lworld In-Reply-To: References: Message-ID: On Wed, 17 Jan 2024 17:59:10 GMT, Jatin Bhateja wrote: > Merge latest lworld changes into lworld+vector branch. > > All VectorAPI and Valhalla tests are passing at various AVX levels with and w/o -XX:+DeoptimizeALot. > > Best Regards, > Jatin This pull request has now been integrated. 
Changeset: f3a0fe73 Author: Jatin Bhateja URL: https://git.openjdk.org/valhalla/commit/f3a0fe732e68b856cf9ed81f9a54e24cd6ecb645 Stats: 168937 lines in 4566 files changed: 79424 ins; 38494 del; 51019 mod Merge lworld ------------- PR: https://git.openjdk.org/valhalla/pull/970 From dsimms at openjdk.org Thu Jan 18 06:43:55 2024 From: dsimms at openjdk.org (David Simms) Date: Thu, 18 Jan 2024 06:43:55 GMT Subject: [lworld] Integrated: Merge jdk In-Reply-To: References: Message-ID: On Wed, 17 Jan 2024 16:45:07 GMT, David Simms wrote: > Merge jdk-22+18, jdk-22+19 and jdk-22+20 This pull request has now been integrated. Changeset: c48006df Author: David Simms URL: https://git.openjdk.org/valhalla/commit/c48006dfc05bb0c41ab9ae55ead226356259c46d Stats: 34775 lines in 1165 files changed: 20586 ins; 6900 del; 7289 mod Merge jdk Merge jdk-22+18, jdk-22+19 and jdk-22+20 ------------- PR: https://git.openjdk.org/valhalla/pull/969 From chagedorn at openjdk.org Thu Jan 18 09:09:00 2024 From: chagedorn at openjdk.org (Christian Hagedorn) Date: Thu, 18 Jan 2024 09:09:00 GMT Subject: [lworld] RFR: 8323781: [lworld] Synchronization on inline type does not throw IllegalMonitorStateException with lightweight locking Message-ID: [JDK-8315880](https://bugs.openjdk.org/browse/JDK-8315880) sets `LM_LIGHTWEIGHT` as new default locking mode which revealed an existing bug after the merge of jdk-22+16 (triggers already before the merge by explicitly setting `-XX:LockingMode=2`). JDK-8315880 was backed out again in jdk-22+24 ([JDK-8319253](https://bugs.openjdk.org/browse/JDK-8319253)). But the backout is not yet merged in. The REDO [JDK-8319251](https://bugs.openjdk.org/browse/JDK-8319251) is still open. Maybe we need to add some runs in the CI in the future which test all three locking modes. This patch fixes a problem in the newly added lightweight locking code ([JDK-8291555](https://bugs.openjdk.org/browse/8291555)) where we forgot to mask the inline type bit in the mark word. 
This patch fixes that and unlists the problemlisted tests. Thanks, Christian ------------- Commit messages: - 8323781: [lworld] Synchronization on inline type does not throw IllegalMonitorStateException with lightweight locking Changes: https://git.openjdk.org/valhalla/pull/972/files Webrev: https://webrevs.openjdk.org/?repo=valhalla&pr=972&range=00 Issue: https://bugs.openjdk.org/browse/JDK-8323781 Stats: 12 lines in 3 files changed: 9 ins; 3 del; 0 mod Patch: https://git.openjdk.org/valhalla/pull/972.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/972/head:pull/972 PR: https://git.openjdk.org/valhalla/pull/972 From thartmann at openjdk.org Thu Jan 18 09:19:30 2024 From: thartmann at openjdk.org (Tobias Hartmann) Date: Thu, 18 Jan 2024 09:19:30 GMT Subject: [lworld] RFR: 8323781: [lworld] Synchronization on inline type does not throw IllegalMonitorStateException with lightweight locking In-Reply-To: References: Message-ID: <931f0aidQu3yyXUgE1XqLMRS9ObZ8MXvVkIb7nHAiFI=.12d1163b-edc9-40cf-97da-43fca9b2eb03@github.com> On Thu, 18 Jan 2024 09:02:51 GMT, Christian Hagedorn wrote: > [JDK-8315880](https://bugs.openjdk.org/browse/JDK-8315880) sets `LM_LIGHTWEIGHT` as new default locking mode which revealed an existing bug after the merge of jdk-22+16 (triggers already before the merge by explicitly setting `-XX:LockingMode=2`). JDK-8315880 was backed out again in jdk-22+24 ([JDK-8319253](https://bugs.openjdk.org/browse/JDK-8319253)). But the backout is not yet merged in. The REDO [JDK-8319251](https://bugs.openjdk.org/browse/JDK-8319251) is still open. Maybe we need to add some runs in the CI in the future which test all three locking modes. > > This patch fixes a problem in the newly added lightweight locking code ([JDK-8291555](https://bugs.openjdk.org/browse/8291555)) where we forgot to mask the inline type bit in the mark word. This patch fixes that and unlists the problemlisted tests. > > Thanks, > Christian That looks good to me. Thanks for fixing! 
------------- Marked as reviewed by thartmann (Committer). PR Review: https://git.openjdk.org/valhalla/pull/972#pullrequestreview-1829202705 From chagedorn at openjdk.org Thu Jan 18 09:19:31 2024 From: chagedorn at openjdk.org (Christian Hagedorn) Date: Thu, 18 Jan 2024 09:19:31 GMT Subject: [lworld] RFR: 8323781: [lworld] Synchronization on inline type does not throw IllegalMonitorStateException with lightweight locking In-Reply-To: References: Message-ID: On Thu, 18 Jan 2024 09:02:51 GMT, Christian Hagedorn wrote: > [JDK-8315880](https://bugs.openjdk.org/browse/JDK-8315880) sets `LM_LIGHTWEIGHT` as new default locking mode which revealed an existing bug after the merge of jdk-22+16 (triggers already before the merge by explicitly setting `-XX:LockingMode=2`). JDK-8315880 was backed out again in jdk-22+24 ([JDK-8319253](https://bugs.openjdk.org/browse/JDK-8319253)). But the backout is not yet merged in. The REDO [JDK-8319251](https://bugs.openjdk.org/browse/JDK-8319251) is still open. Maybe we need to add some runs in the CI in the future which test all three locking modes. > > This patch fixes a problem in the newly added lightweight locking code ([JDK-8291555](https://bugs.openjdk.org/browse/8291555)) where we forgot to mask the inline type bit in the mark word. This patch fixes that and unlists the problemlisted tests. > > Thanks, > Christian Thanks Tobias for your review! 
------------- PR Comment: https://git.openjdk.org/valhalla/pull/972#issuecomment-1898086675 From chagedorn at openjdk.org Thu Jan 18 12:13:35 2024 From: chagedorn at openjdk.org (Christian Hagedorn) Date: Thu, 18 Jan 2024 12:13:35 GMT Subject: [lworld] Integrated: 8323781: [lworld] Synchronization on inline type does not throw IllegalMonitorStateException with lightweight locking In-Reply-To: References: Message-ID: On Thu, 18 Jan 2024 09:02:51 GMT, Christian Hagedorn wrote: > [JDK-8315880](https://bugs.openjdk.org/browse/JDK-8315880) sets `LM_LIGHTWEIGHT` as new default locking mode which revealed an existing bug after the merge of jdk-22+16 (triggers already before the merge by explicitly setting `-XX:LockingMode=2`). JDK-8315880 was backed out again in jdk-22+24 ([JDK-8319253](https://bugs.openjdk.org/browse/JDK-8319253)). But the backout is not yet merged in. The REDO [JDK-8319251](https://bugs.openjdk.org/browse/JDK-8319251) is still open. Maybe we need to add some runs in the CI in the future which test all three locking modes. > > This patch fixes a problem in the newly added lightweight locking code ([JDK-8291555](https://bugs.openjdk.org/browse/8291555)) where we forgot to mask the inline type bit in the mark word. This patch fixes that and unlists the problemlisted tests. > > Thanks, > Christian This pull request has now been integrated. Changeset: 16fa7709 Author: Christian Hagedorn Committer: Tobias Hartmann URL: https://git.openjdk.org/valhalla/commit/16fa7709f15465e736dd3a83707ffe88d7bc61bd Stats: 12 lines in 3 files changed: 9 ins; 3 del; 0 mod 8323781: [lworld] Synchronization on inline type does not throw IllegalMonitorStateException with lightweight locking Reviewed-by: thartmann ------------- PR: https://git.openjdk.org/valhalla/pull/972 From jbossons at gmail.com Thu Jan 18 19:38:58 2024 From: jbossons at gmail.com (John Bossons) Date: Thu, 18 Jan 2024 14:38:58 -0500 Subject: Null-restricted types: Why so complicated? 
Message-ID: Hi all, Maybe I am missing something, but the proposal seems to be trying to do too much. Specifically: Why not simply provide that appending ! to a type specification for an object (field, array element, or parameter) means that the object is not only null-restricted but also never zero and necessarily non-atomic unless small? Why complicate the specification with an implicit constructor that a developer will never explicitly invoke? Why permit a developer to 'opt in' to non-atomic? Sure, that means trying to read a zero value triggers an NPE. That just means that a type that can legitimately have a zero value cannot be specified as null-restricted, since a zero value (e.g. a {null, null} Name) is the equivalent of a null unrestricted value object. Why go beyond that? If a non-null zero value is possible, the type cannot be null-restricted and so can only be an unrestricted JEP 401 value type. End of story. With respect to non-atomic, what is new? Yes, unexpected instances may occur without synchronization if the object is larger than the word size of the implementation. Why do we need to extend a LooselyConsistentValue interface to know/permit that? Can we not keep this 'simple' (if that word has meaning in this context)? What am I missing? John -- Phone: (416) 450-3584 (cell) -------------- next part -------------- An HTML attachment was scrubbed... URL: From liangchenblue at gmail.com Thu Jan 18 21:55:52 2024 From: liangchenblue at gmail.com (-) Date: Thu, 18 Jan 2024 15:55:52 -0600 Subject: Null-restricted types: Why so complicated? In-Reply-To: References: Message-ID: Hi John, On Thu, Jan 18, 2024 at 2:30 PM John Bossons wrote: > Hi all, > > Maybe I am missing something, but the proposal seems to be trying to do > too much. > > Specifically: Why not simply provide that appending !
to a type > specification for an object (field, array element, or parameter) means that > the object is not only null-restricted but also never zero and > necessarily non-atomic unless small? > First, a reminder that some objects cannot be non-atomic, mostly when fields have dependencies/constraints on each other: if you have a range, you cannot allow its lower bound to be larger than its upper bound. Non-atomic representations cannot avoid this pitfall. Also you seem to misunderstand non-atomic: if an object is non-atomic, each of its fields can update independently from each other, so a 3-d position can be non-atomic, but not so for a range. Non-atomicity is dangerous, and it should not be the default. However, if an atomic class is small enough, like OptionalInt (as many architectures now have atomic handling of 16 bytes, etc.), the JVM may choose to apply non-atomic optimizations to it for better performance without violating its object constraints. > > Why complicate the specification with an implicit constructor that a > developer will never explicitly invoke? Why permit a developer to 'opt in' > to non-atomic? > The implicit constructor can always be called; its existence asks programmers to affirm that the zero-filled inlined instance is a valid instance. And this instance is different from a null, as null is a pointer, yet the zero-instance has a different size defined by the class layout in the stack/heap.
> You see the inlined zero instance and the null pointer have different sizes, and thus they are not exchangeable. Converting the inlined zero instance to null to throw an NPE is complex and hurtful to performance, as you would scan unrelated bits for almost every field access. And as for unrestricted value types, yes, they exist and can possibly be inlined as well if the restricted type is small enough (i.e. has space for an extra bit indicating nullity). But remember, the nullity bit itself isn't even atomic with (it depends on) the rest of the object! You don't want the nullity bit to indicate null while the rest of the object indicates some sort of non-null value, which can happen in a non-atomic context. > > With respect to non-atomic, what is new? Yes, unexpected instances may > occur without synchronization if the object is larger than the word size of > the implementation. Why do we need to extend a LooselyConsistentValue > interface to know/permit that? > Unexpected instances don't occur without synchronization if you use finals, such as in Java's String or immutable List.of(). These APIs may capture any "permitted value" from the arrays passed in, but once constructed, the captured value remains constant no matter which thread observes the String/List object reference. (Technically, the JVM implements this with a store-store fence between the end of field writes in the constructor and the point where the object reference is shared, and a load-load fence between the object reference read and the field read.) Value classes are about the safety of final fields in programming instead of close encounters of the third kind with synchronization, volatiles, and fences. > > Can we not keep this 'simple' (if that word has meaning in this context)? > What am I missing? > I think you are missing a bit about how the layout (how inlining is represented in memory) and value classes (the thread safety their final fields offer) work, and what "non-atomic" means. Feel free to question more.
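The final-field publication guarantee described above can be illustrated in today's Java. This is a minimal sketch (the Range class is borrowed from the thread's examples; the class and field names are otherwise illustrative): the constructor is the only place the lo <= hi invariant needs checking, and final fields publish it safely (JLS 17.5) without synchronization.

```java
// Sketch of the point above: with final fields, any thread that sees a
// Range reference also sees its invariant (lo <= hi) intact, with no
// extra synchronization required of the reader.
public class FinalFieldDemo {
    public static void main(String[] args) {
        Range r = new Range(1, 10);
        // Once constructed, no torn {lo=1, hi=0} state is observable.
        System.out.println(r.lo <= r.hi);  // prints "true"
    }
}

final class Range {
    final int lo, hi;  // invariant: lo <= hi

    Range(int lo, int hi) {
        if (lo > hi) throw new IllegalArgumentException("lo > hi");
        this.lo = lo;
        this.hi = hi;
    }
}
```

A mutable variant of the same class, updated from two threads without synchronization, is exactly where the lo > hi hybrid discussed above could be observed.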
> > John > > > -- > Phone: (416) 450-3584 (cell) > From chagedorn at openjdk.org Fri Jan 19 08:17:55 2024 From: chagedorn at openjdk.org (Christian Hagedorn) Date: Fri, 19 Jan 2024 08:17:55 GMT Subject: [lworld] RFR: 8324114: [lworld] GTestWrapper.java fails with "assert(LockingMode == LM_LEGACY) failed: should only be called with legacy stack locking" Message-ID: This patch fixes the gtests which fail with the lightweight locking mode (`-XX:LockingMode=2`) but also with heavy monitors (`-XX:LockingMode=0`). I only noticed while working on this bug that, in the meantime, we temporarily enforce legacy stack-locking (`-XX:LockingMode=1`) since merging jdk-22+17: https://github.com/openjdk/valhalla/blob/16fa7709f15465e736dd3a83707ffe88d7bc61bd/src/hotspot/share/runtime/arguments.cpp#L1922-L1925 Nevertheless, I think it's still worth proposing these test fixes now since we probably want to support lightweight locking at some point. The gtest used `markWord::has_locker()`, which only works with `LM_LEGACY`. For `LM_LIGHTWEIGHT`, we need to call `markWord::is_fast_locked()`. Additionally, when running with `LM_MONITOR`, the test to match some things in the mark word fails because we only use heavy monitors and do not use the mark word bits.
Thanks, Christian ------------- Commit messages: - 8324114: [lworld] GTestWrapper.java fails with "assert(LockingMode == LM_LEGACY) failed: should only be called with legacy stack locking" Changes: https://git.openjdk.org/valhalla/pull/973/files Webrev: https://webrevs.openjdk.org/?repo=valhalla&pr=973&range=00 Issue: https://bugs.openjdk.org/browse/JDK-8324114 Stats: 18 lines in 1 file changed: 14 ins; 0 del; 4 mod Patch: https://git.openjdk.org/valhalla/pull/973.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/973/head:pull/973 PR: https://git.openjdk.org/valhalla/pull/973 From jbhateja at openjdk.org Fri Jan 19 12:47:28 2024 From: jbhateja at openjdk.org (Jatin Bhateja) Date: Fri, 19 Jan 2024 12:47:28 GMT Subject: [lworld+fp16] RFR: Merge lworld Message-ID: <8fZ3w3xmfyWb6RaBQOW0GlYIHSGI5Onw3pEC9Cwg3Qg=.2a54f0dd-4179-4d47-ae70-1e33a944c4d1@github.com> Merge latest changes from lworld. FP16 specific tests and tier1 regressions are clean. Best Regards, Jatin ------------- Commit messages: - Merge branch 'lworld' of http://github.com/openjdk/valhalla into merge_lworld_to_fp16 - 8323781: [lworld] Synchronization on inline type does not throw IllegalMonitorStateException with lightweight locking - Merge jdk - 8318363: Foreign benchmarks fail to build on some platforms - 8318183: C2: VM may crash after hitting node limit - 8315974: Make fields final in 'com.sun.crypto.provider' package - 8317886: Add @sealedGraph to ByteBuffer - 8318365: Test runtime/cds/appcds/sharedStrings/InternSharedString.java fails after JDK-8311538 - 8309966: Enhanced TLS connections - 8286503: Enhance security classes - ... 
and 1220 more: https://git.openjdk.org/valhalla/compare/6f1ecb88...a39c9036 The webrevs contain the adjustments done while merging with regards to each parent branch: - lworld+fp16: https://webrevs.openjdk.org/?repo=valhalla&pr=974&range=00.0 - lworld: https://webrevs.openjdk.org/?repo=valhalla&pr=974&range=00.1 Changes: https://git.openjdk.org/valhalla/pull/974/files Stats: 166439 lines in 4539 files changed: 81714 ins; 35523 del; 49202 mod Patch: https://git.openjdk.org/valhalla/pull/974.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/974/head:pull/974 PR: https://git.openjdk.org/valhalla/pull/974 From jbhateja at openjdk.org Fri Jan 19 13:41:09 2024 From: jbhateja at openjdk.org (Jatin Bhateja) Date: Fri, 19 Jan 2024 13:41:09 GMT Subject: [lworld+fp16] RFR: Merge lworld [v2] In-Reply-To: <8fZ3w3xmfyWb6RaBQOW0GlYIHSGI5Onw3pEC9Cwg3Qg=.2a54f0dd-4179-4d47-ae70-1e33a944c4d1@github.com> References: <8fZ3w3xmfyWb6RaBQOW0GlYIHSGI5Onw3pEC9Cwg3Qg=.2a54f0dd-4179-4d47-ae70-1e33a944c4d1@github.com> Message-ID: > Merge latest changes from lworld. > > FP16 specific tests and tier1 regressions are clean. > > Best Regards, > Jatin Jatin Bhateja has updated the pull request with a new target base due to a merge or a rebase. The pull request now contains four commits: - Merge branch 'lworld' of http://github.com/openjdk/valhalla into merge_lworld_to_fp16 - [lworld+fp16] Merge lworld - 8308363: Initial compiler support for FP16 scalar operations. 
Reviewed-by: sviswanathan - Merge lworld ------------- Changes: https://git.openjdk.org/valhalla/pull/974/files Webrev: https://webrevs.openjdk.org/?repo=valhalla&pr=974&range=01 Stats: 1548 lines in 41 files changed: 1506 ins; 3 del; 39 mod Patch: https://git.openjdk.org/valhalla/pull/974.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/974/head:pull/974 PR: https://git.openjdk.org/valhalla/pull/974 From jbhateja at openjdk.org Fri Jan 19 13:41:11 2024 From: jbhateja at openjdk.org (Jatin Bhateja) Date: Fri, 19 Jan 2024 13:41:11 GMT Subject: [lworld+fp16] Integrated: Merge lworld In-Reply-To: <8fZ3w3xmfyWb6RaBQOW0GlYIHSGI5Onw3pEC9Cwg3Qg=.2a54f0dd-4179-4d47-ae70-1e33a944c4d1@github.com> References: <8fZ3w3xmfyWb6RaBQOW0GlYIHSGI5Onw3pEC9Cwg3Qg=.2a54f0dd-4179-4d47-ae70-1e33a944c4d1@github.com> Message-ID: On Fri, 19 Jan 2024 12:24:20 GMT, Jatin Bhateja wrote: > Merge latest changes from lworld. > > FP16 specific tests and tier1 regressions are clean. > > Best Regards, > Jatin This pull request has now been integrated. Changeset: 78656b1c Author: Jatin Bhateja URL: https://git.openjdk.org/valhalla/commit/78656b1c913a8a3d90fa33e68b9a924b9691b58f Stats: 166439 lines in 4539 files changed: 81714 ins; 35523 del; 49202 mod Merge lworld ------------- PR: https://git.openjdk.org/valhalla/pull/974 From jbossons at gmail.com Fri Jan 19 16:07:24 2024 From: jbossons at gmail.com (John Bossons) Date: Fri, 19 Jan 2024 11:07:24 -0500 Subject: Null-restricted types: Why so complicated? In-Reply-To: References: Message-ID: Thanks for your comments. I was not sufficiently explicit. Let me focus on implicit. I guess my dislike is of introducing a 'fake' constructor into the definition of a class. I say 'fake' because, as I understand it, the only purpose of the implicit constructor is to indicate to the JVM/compiler that a never-null instance can be created. 
But in Java idiom that means that a developer can invoke the public implicit constructor, which will cause confusion. Maybe it would be better to require a potentially null-restricted class to extend a marker interface ('extends NeverNullPossible'? Or maybe, looking ahead to my next comment, 'extends AllZerosIsNull'?). That would enable the compiler to catch an invalid use of the ! marker in a declaration, just as the proposed implicit constructor does, while conforming better to common Java idiom. My further suggestion is that appending ! to a type should mean that the default initialized value of an instance (all fields zero) is equivalent to null, so that Range![] a = new Range![100]; // allocated with zero values System.out.println(a[5]); // throws NullPointerException (zero fields) This better conforms to current idiom, where the initial initialization is with nulls and the println invocation on a null array element or field throws an NPE. As you say, my suggestion means runtime testing to determine if all fields are zero, which has a performance cost. This will only occur if the JVM implements the ! specification, which it presumably will only do if the object is small. And the cost will be small (I am presuming) relative to savings from allowing the memory footprint to match that of primitives. Am I wrong? There is value in conforming to current idiom. Turning to the LooselyConsistentValue, I withdraw my comments. I mistakenly presumed that its use would be required, which is false. It simply enables a single-threaded (or volatile-protected) application to allow additional inlining, which is harmless. John On Thu, Jan 18, 2024 at 4:56 PM - wrote: > Hi John, > > On Thu, Jan 18, 2024 at 2:30 PM John Bossons wrote: >> Hi all, >> >> Maybe I am missing something, but the proposal seems to be trying to do >> too much. >> >> Specifically: Why not simply provide that appending !
to a type >> specification for an object (field, array element, or parameter) means that >> that the object is not only null-restricted but also never zero and >> necessarily non-atomic unless small? >> > First, a reminder that some objects cannot be non-atomic, mostly when > fields have dependencies/constraints on each other: if you have a range, > you cannot allow its lower bound to be larger than its upper bound. > Non-atomic representations cannot avoid this pitfall. Also you seem > to misunderstand non-atomic: if an object is non-atomic, each of its fields > can update independently from each other, so a 3-d position can be > non-atomic, but not so for a range. Non-atomicity is dangerous, and it > should not be the default. However, if an atomic class is small enough, > like OptionalInt (as now many architecture has like atomic handling of 16 > bytes etc.) JVM may choose to apply non-atomic optimizations to them for > better performance without violating their object constraints. > >> >> Why complicate the specification with an implicit constructor that a >> developer will never explicitly invoke? Why permit a developer to 'opt in' >> to non-atomic? >> > The implicit constructor can always be called; its existence asks > programmers to affirm that the zero-filled inlined instance is a valid > instance. And this instance is different from a null, as null is a pointer, > yet the zero-instance has a different size defined by the class layout in > the stack/heap. > >> >> Sure, that means trying to read a zero value triggers a NPE. That just >> means that a type that can legitimately have a zero value cannot be >> specified as null-restricted, since a zero value (e.g. a {null, null} Name) >> is the equivalent of a null unrestricted value object. Why go beyond that? >> If a non-null zero value is possible, the type cannot be null-restricted >> and so can only be an unrestricted JEP 401 value type. End of story. 
>> > You see the inlined zero instance and the null pointer have different > sizes, and thus they are not exchangeable. Converting the inlined zero > instance to null to throw NPE is complex and hurtful to performance as you > will scan unrelated bits for almost every field access. > > And for unrestricted value type, yes, they exist and can possibly be > inlined as well if the restricted type is small enough (i.e. has space for > extra bit indicating nullity) But reminder, the nullity bit itself isn't > even non-atomic with (depends on) the rest of the object! You don't want > the nullity to indicate null while the rest of the object indicate some > sort of non-null value, which can happen in a non-atomic context. > >> >> With respect to non-atomic, what is new? Yes, unexpected instances may >> occur without synchronization if the object is larger than the word size of >> the implementation. Why do we need to extend a LooselyConsistentValue >> interface to know/permit that? >> > Unexpected instances don't occur without synchronization if you use > finals, such as in Java's String or immutable List.of(). These APIs may > capture any "permitted value" from the arrays passed in, but once > constructed, the captured value remains constant no matter which thread > observes the String/List object reference. (Technically, JVM implements > this with a store-store fence between end of field writes in the > constructor and object reference is shared anywhere, and a load-load fence > between object reference read and field read) Value classes is about the > safety of final fields in programming instead of the close encounter of > third kinds of synchronization, volatiles, and fences. > >> >> Can we not keep this 'simple' (if that word has meaning in this context)? >> What am I missing? >> > I think you are missing a bit about how the layout (inlining is > represented in memory) and value classes (the thread safety its final > offers) work, and what "non-atomic" means. 
Feel free to question more. > >> >> John >> >> >> -- >> Phone: (416) 450-3584 (cell) >> > -- Phone: (416) 450-3584 (cell) From anhmdq at gmail.com Fri Jan 19 16:47:51 2024 From: anhmdq at gmail.com (Quân Anh Mai) Date: Sat, 20 Jan 2024 00:47:51 +0800 Subject: Fwd: Null-restricted types: Why so complicated? In-Reply-To: References: Message-ID: I forgot to cc valhalla-dev ---------- Forwarded message --------- From: Quân Anh Mai Date: Sat, 20 Jan 2024 at 00:33 Subject: Re: Null-restricted types: Why so complicated? To: John Bossons Hi, > But in Java idiom that means that a developer can invoke the public implicit constructor, which will cause confusion. An implicit constructor can be invoked like any other constructor, and it will return an all-zero instance of the corresponding class. > My further suggestion is that appending !
(e.g Range r = null;) Regards, Quan Anh On Sat, 20 Jan 2024 at 00:09, John Bossons wrote: > Thanks for your comments. I was not sufficiently explicit. > > Let me focus on implicit. I guess my dislike is of introducing a 'fake' > constructor into the definition of a class. I say 'fake' because, as I > understand it, the only purpose of the implicit constructor is to indicate > to the JVM/compiler that a never-null instance can be created. But in Java > idiom that means that a developer can invoke the public implicit > constructor, which will cause confusion. > > Maybe it would be better to require a potentially null-restricted class to > extend a marker interface ('extends NeverNullPossible'? Or maybe, looking > ahead to my next comment, 'extends AllZerosIsNull'?). That would enable the > compiler to catch an invalid use of the ! marker in a declaration, just as > the proposed implicit constructor does, while conforming better to common > Java idiom. > > My further suggestion is that appending ! to a type should mean that the > default initialized value of an instance (all fields zero) is equivalent to > null, so that > Range![] a = new Range![100]; // allocated with zero values > System.out.println(a[5]); // throws NullPointerException (zero > fields) > This better conforms to current idiom, where the initial initialization is > with nulls and the println invocation on a null array element or field > throws a NPE. > > As you say, my suggestion means runtime testing to determine if all fields > are zero, which has a performance cost. This will only occur if the JVM > implements the ! specification, which it presumably will only do if the > object is small. And the cost will be small (I am presuming) relative to > savings from allowing the memory footprint to match that of primitives. Am > I wrong? There is value in conforming to current idiom. > > Turning to the LooselyConsistentValue, I withdraw my comments. 
I > mistakenly presumed that its use would be required, which is false. It > simply enables a single-threaded (or volatile-protected) application to > allow additional inlining, which is harmless. > > John > > On Thu, Jan 18, 2024 at 4:56 PM - wrote: > >> Hi John, >> >> On Thu, Jan 18, 2024 at 2:30 PM John Bossons wrote: >> >>> Hi all, >>> >>> Maybe I am missing something, but the proposal seems to be trying to do >>> too much. >>> >>> Specifically: Why not simply provide that appending ! to a type >>> specification for an object (field, array element, or parameter) means that >>> that the object is not only null-restricted but also never zero and >>> necessarily non-atomic unless small? >>> >> First, a reminder that some objects cannot be non-atomic, mostly when >> fields have dependencies/constraints on each other: if you have a range, >> you cannot allow its lower bound to be larger than its upper bound. >> Non-atomic representations cannot avoid this pitfall. Also you seem >> to misunderstand non-atomic: if an object is non-atomic, each of its fields >> can update independently from each other, so a 3-d position can be >> non-atomic, but not so for a range. Non-atomicity is dangerous, and it >> should not be the default. However, if an atomic class is small enough, >> like OptionalInt (as now many architecture has like atomic handling of 16 >> bytes etc.) JVM may choose to apply non-atomic optimizations to them for >> better performance without violating their object constraints. >> >>> >>> Why complicate the specification with an implicit constructor that a >>> developer will never explicitly invoke? Why permit a developer to 'opt in' >>> to non-atomic? >>> >> The implicit constructor can always be called; its existence asks >> programmers to affirm that the zero-filled inlined instance is a valid >> instance.
And this instance is different from a null, as null is a pointer, >> yet the zero-instance has a different size defined by the class layout in >> the stack/heap. >> >>> >>> Sure, that means trying to read a zero value triggers a NPE. That just >>> means that a type that can legitimately have a zero value cannot be >>> specified as null-restricted, since a zero value (e.g. a {null, null} Name) >>> is the equivalent of a null unrestricted value object. Why go beyond that? >>> If a non-null zero value is possible, the type cannot be null-restricted >>> and so can only be an unrestricted JEP 401 value type. End of story. >>> >> You see the inlined zero instance and the null pointer have different >> sizes, and thus they are not exchangeable. Converting the inlined zero >> instance to null to throw NPE is complex and hurtful to performance as you >> will scan unrelated bits for almost every field access. >> >> And for unrestricted value type, yes, they exist and can possibly be >> inlined as well if the restricted type is small enough (i.e. has space for >> extra bit indicating nullity) But reminder, the nullity bit itself isn't >> even non-atomic with (depends on) the rest of the object! You don't want >> the nullity to indicate null while the rest of the object indicate some >> sort of non-null value, which can happen in a non-atomic context. >> >>> >>> With respect to non-atomic, what is new? Yes, unexpected instances may >>> occur without synchronization if the object is larger than the word size of >>> the implementation. Why do we need to extend a LooselyConsistentValue >>> interface to know/permit that? >>> >> Unexpected instances don't occur without synchronization if you use >> finals, such as in Java's String or immutable List.of(). These APIs may >> capture any "permitted value" from the arrays passed in, but once >> constructed, the captured value remains constant no matter which thread >> observes the String/List object reference. 
(Technically, JVM implements >> this with a store-store fence between end of field writes in the >> constructor and object reference is shared anywhere, and a load-load fence >> between object reference read and field read) Value classes is about the >> safety of final fields in programming instead of the close encounter of >> third kinds of synchronization, volatiles, and fences. >> >>> >>> Can we not keep this 'simple' (if that word has meaning in this >>> context)? What am I missing? >>> >> I think you are missing a bit about how the layout (inlining is >> represented in memory) and value classes (the thread safety its final >> offers) work, and what "non-atomic" means. Feel free to question more. >> >>> >>> John >>> >>> >>> -- >>> Phone: (416) 450-3584 (cell) >>> >> > > -- > Phone: (416) 450-3584 (cell) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbossons at gmail.com Fri Jan 19 17:34:00 2024 From: jbossons at gmail.com (John Bossons) Date: Fri, 19 Jan 2024 12:34:00 -0500 Subject: Null-restricted types: Why so complicated? In-Reply-To: References: Message-ID: Hi Quan Anh, We're talking past each other. ME: But in Java idiom that means that a developer can invoke the public implicit constructor, which will cause confusion. YOU: An implicit constructor can be invoked like any other constructor, and it will return an all-zero instance of the corresponding class. Precisely. Which will often not be valid in the application context. It should be possible for a constructor to exclude an all-zero instance, such as a zero-length Range or an all-null Name (to use two examples in the draft spec) without giving up the ability to specify that it is null-restricted. Or for the 'real' constructor to be private, invoked from a factory method, which is effectively made useless as a protective feature if a public constructor is also provided. 
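The private-constructor-plus-factory idiom being defended here can be sketched in today's Java (the Name class echoes the thread's example; the of factory method and its validation are illustrative, not from the draft spec):

```java
// Today's idiom for excluding invalid (e.g. all-null) instances: keep the
// constructor private and validate in a factory method. A public all-zero
// constructor on the same class would bypass exactly this guard.
public class NameDemo {
    public static void main(String[] args) {
        Name n = Name.of("Ada", "Lovelace");
        System.out.println(n.first() + " " + n.last());  // prints "Ada Lovelace"
    }
}

final class Name {
    private final String first, last;

    private Name(String first, String last) {  // not reachable from outside
        this.first = first;
        this.last = last;
    }

    static Name of(String first, String last) {
        if (first == null || last == null)
            throw new IllegalArgumentException("Name fields must be non-null");
        return new Name(first, last);
    }

    String first() { return first; }
    String last()  { return last; }
}
```

With this shape, Name.of(null, null) throws, so no {null, null} Name can ever be created — the protection that a mandatory public implicit constructor would undo.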
(If the implicit constructor could be specified as private, that would take care of the problem. Extending a marker interface is simpler.) ME: > My further suggestion is that appending ! to a type should mean that the > default initialized value of an instance (all fields zero) is equivalent to > null, so that > Range![] a = new Range![100]; // allocated with zero values > System.out.println(a[5]); // throws NullPointerException (zero > fields) > This better conforms to current idiom, where the initial initialization is > with nulls and the println invocation on a null array element or field > throws a NPE. YOU: What is the value of this proposal? If you want the all-zero instance to be equivalent to null, just do not have any constructor that initializes an instance to that state. The whole point of null-restricted fields/variables is to indicate that the field/variable is always valid. The issue here is not what the constructor does -- it hasn't been invoked yet for the element on which println is invoked -- but rather that the fact that the array element is undefined should be capable of being caught. On Fri, Jan 19, 2024 at 11:48 AM Quân Anh Mai wrote: > I forgot to cc valhalla-dev > > ---------- Forwarded message --------- > From: Quân Anh Mai > Date: Sat, 20 Jan 2024 at 00:33 > Subject: Re: Null-restricted types: Why so complicated? > To: John Bossons > > > Hi, > > > But in Java idiom that means that a developer can invoke the public > implicit constructor, which will cause confusion. > > An implicit constructor can be invoked like any other constructor, and it > will return an all-zero instance of the corresponding class. > > > My further suggestion is that appending !
to a type should mean that the > default initialized value of an instance (all fields zero) is equivalent to > null, so that > > Range![] a = new Range![100]; // allocated with zero values > > System.out.println(a[5]); // throws NullPointerException > (zero fields) > > This better conforms to current idiom, where the initial initialization > is with nulls and the println invocation on a null array element or field > throws a NPE. > > What is the value of this proposal? If you want the all-zero instance to > be equivalent to null, just do not have any constructor that initializes an > instance to that state. The whole point of null-restricted fields/variables > is to indicate that the field/variable is always valid. > > I think you are having some confusion. Null-restriction is the property of > a variable/field, i.e. the property of the holder, not of the class itself. > The class having implicit constructors simply means that it allows the > existence of null-restricted fields/variables. The class can be used as > normal with non-null-restricted types. (e.g Range r = null;) > > Regards, > Quan Anh > > On Sat, 20 Jan 2024 at 00:09, John Bossons wrote: > >> Thanks for your comments. I was not sufficiently explicit. >> >> Let me focus on implicit. I guess my dislike is of introducing a 'fake' >> constructor into the definition of a class. I say 'fake' because, as I >> understand it, the only purpose of the implicit constructor is to indicate >> to the JVM/compiler that a never-null instance can be created. But in Java >> idiom that means that a developer can invoke the public implicit >> constructor, which will cause confusion. >> >> Maybe it would be better to require a potentially null-restricted class >> to extend a marker interface ('extends NeverNullPossible'? Or maybe, >> looking ahead to my next comment, 'extends AllZerosIsNull'?). That would >> enable the compiler to catch an invalid use of the ! 
marker in a >> declaration, just as the proposed implicit constructor does, while >> conforming better to common Java idiom. >> >> My further suggestion is that appending ! to a type should mean that the >> default initialized value of an instance (all fields zero) is equivalent to >> null, so that >> Range![] a = new Range![100]; // allocated with zero values >> System.out.println(a[5]); // throws NullPointerException (zero >> fields) >> This better conforms to current idiom, where the initial initialization >> is with nulls and the println invocation on a null array element or field >> throws a NPE. >> >> As you say, my suggestion means runtime testing to determine if all >> fields are zero, which has a performance cost. This will only occur if the >> JVM implements the ! specification, which it presumably will only do if the >> object is small. And the cost will be small (I am presuming) relative to >> savings from allowing the memory footprint to match that of primitives. Am >> I wrong? There is value in conforming to current idiom. >> >> Turning to the LooselyConsistentValue, I withdraw my comments. I >> mistakenly presumed that its use would be required, which is false. It >> simply enables a single-threaded (or volatile-protected) application to >> allow additional inlining, which is harmless. >> >> John >> >> On Thu, Jan 18, 2024 at 4:56?PM - wrote: >> >>> Hi John, >>> >>> On Thu, Jan 18, 2024 at 2:30?PM John Bossons wrote: >>> >>>> Hi all, >>>> >>>> Maybe I am missing something, but the proposal seems to be trying to do >>>> too much. >>>> >>>> Specifically: Why not simply provide that appending ! to a type >>>> specification for an object (field, array element, or parameter) means that >>>> that the object is not only null-restricted but also never zero and >>>> necessarily non-atomic unless small? 
>>>> >>> First, a reminder that some objects cannot be non-atomic, mostly when >>> fields have dependencies/constraints on each other: if you have a range, >>> you cannot allow its lower bound to be larger than its upper bound. >>> Non-atomic representations cannot avoid this pitfall. Also you seem >>> to misunderstand non-atomic: if an object is non-atomic, each of its fields >>> can update independently from each other, so a 3-d position can be >>> non-atomic, but not so for a range. Non-atomicity is dangerous, and it >>> should not be the default. However, if an atomic class is small enough, >>> like OptionalInt (as now many architecture has like atomic handling of 16 >>> bytes etc.) JVM may choose to apply non-atomic optimizations to them for >>> better performance without violating their object constraints. >>> >>>> >>>> Why complicate the specification with an implicit constructor that a >>>> developer will never explicitly invoke? Why permit a developer to 'opt in' >>>> to non-atomic? >>>> >>> The implicit constructor can always be called; its existence asks >>> programmers to affirm that the zero-filled inlined instance is a valid >>> instance. And this instance is different from a null, as null is a pointer, >>> yet the zero-instance has a different size defined by the class layout in >>> the stack/heap. >>> >>>> >>>> Sure, that means trying to read a zero value triggers a NPE. That just >>>> means that a type that can legitimately have a zero value cannot be >>>> specified as null-restricted, since a zero value (e.g. a {null, null} Name) >>>> is the equivalent of a null unrestricted value object. Why go beyond that? >>>> If a non-null zero value is possible, the type cannot be null-restricted >>>> and so can only be an unrestricted JEP 401 value type. End of story. >>>> >>> You see the inlined zero instance and the null pointer have different >>> sizes, and thus they are not exchangeable. 
Converting the inlined zero >>> instance to null to throw NPE is complex and hurtful to performance as you >>> will scan unrelated bits for almost every field access. >>> >>> And for unrestricted value type, yes, they exist and can possibly be >>> inlined as well if the restricted type is small enough (i.e. has space for >>> extra bit indicating nullity) But reminder, the nullity bit itself isn't >>> even non-atomic with (depends on) the rest of the object! You don't want >>> the nullity to indicate null while the rest of the object indicate some >>> sort of non-null value, which can happen in a non-atomic context. >>> >>>> >>>> With respect to non-atomic, what is new? Yes, unexpected instances may >>>> occur without synchronization if the object is larger than the word size of >>>> the implementation. Why do we need to extend a LooselyConsistentValue >>>> interface to know/permit that? >>>> >>> Unexpected instances don't occur without synchronization if you use >>> finals, such as in Java's String or immutable List.of(). These APIs may >>> capture any "permitted value" from the arrays passed in, but once >>> constructed, the captured value remains constant no matter which thread >>> observes the String/List object reference. (Technically, JVM implements >>> this with a store-store fence between end of field writes in the >>> constructor and object reference is shared anywhere, and a load-load fence >>> between object reference read and field read) Value classes is about the >>> safety of final fields in programming instead of the close encounter of >>> third kinds of synchronization, volatiles, and fences. >>> >>>> >>>> Can we not keep this 'simple' (if that word has meaning in this >>>> context)? What am I missing? >>>> >>> I think you are missing a bit about how the layout (inlining is >>> represented in memory) and value classes (the thread safety its final >>> offers) work, and what "non-atomic" means. Feel free to question more. 
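For reference, the contrast being discussed above can be sketched in plain, compilable Java. The `Range` record here is a stand-in for the draft spec's example class, and the `Range![]` lines in comments are the hypothetical draft-spec syntax, not current Java:

```java
// Today: a reference array starts out all-null, and member access on an
// uninitialized element throws NullPointerException.
record Range(int lo, int hi) {}

public class ArrayDefaultDemo {
    public static void main(String[] args) {
        Range[] a = new Range[100];   // every element is null
        System.out.println(a[5]);     // prints "null" -- println itself accepts null
        try {
            a[5].lo();                // member access on a null element
        } catch (NullPointerException e) {
            System.out.println("NPE: element 5 was never initialized");
        }
        // Hypothetical draft-spec counterpart (not compilable today):
        //   Range![] b = new Range![100];  // elements are zero instances
        //   b[5].lo()                      // would yield 0 rather than throw
    }
}
```

Note that `System.out.println(a[5])` prints "null" rather than throwing; the NPE only arises when a member of the null element is actually accessed.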
>>> >>>> >>>> John >>>> >>>> >>>> -- >>>> Phone: (416) 450-3584 (cell) >>>> >>> >> >> -- >> Phone: (416) 450-3584 (cell) >> > -- Phone: (416) 450-3584 (cell) -------------- next part -------------- An HTML attachment was scrubbed... URL: From anhmdq at gmail.com Fri Jan 19 17:43:34 2024 From: anhmdq at gmail.com (=?UTF-8?Q?Qu=C3=A2n_Anh_Mai?=) Date: Sat, 20 Jan 2024 01:43:34 +0800 Subject: Null-restricted types: Why so complicated? In-Reply-To: References: Message-ID: Why not just Range[] a = new Range[100]; // allocate with null values System.out.println(a[5]); // NullPointerException By using Range![] you accept that a zero value is acceptable, similar to how an int[] works. If you do not want the uninitialized values to be usable then do not use null-restricted type. Cheer, Quan Anh On Sat, 20 Jan 2024 at 01:34, John Bossons wrote: > Hi Quan Anh, > > We're talking past each other. > > ME: But in Java idiom that means that a developer can invoke the public > implicit constructor, which will cause confusion. > YOU: An implicit constructor can be invoked like any other constructor, > and it will return an all-zero instance of the corresponding class. > > > Precisely. Which will often not be valid in the application context. It > should be possible for a constructor to exclude an all-zero instance, such > as a zero-length Range or an all-null Name (to use two examples in the > draft spec) without giving up the ability to specify that it is > null-restricted. Or for the 'real' constructor to be private, invoked from > a factory method, which is effectively made useless as a protective feature > if a public constructor is also provided. (If the implicit constructor > could be specified as private, that would take care of the problem. > Extending a marker interface is simpler.) > > ME: > My further suggestion is that appending ! 
to a type should mean > that the default initialized value of an instance (all fields zero) is > equivalent to null, so that > > Range![] a = new Range![100]; // allocated with zero values > > System.out.println(a[5]); // throws NullPointerException > (zero fields) > > This better conforms to current idiom, where the initial initialization > is with nulls and the println invocation on a null array element or field > throws a NPE. > YOU: What is the value of this proposal? If you want the all-zero > instance to be equivalent to null, just do not have any constructor that > initializes an instance to that state. The whole point of null-restricted > fields/variables is to indicate that the field/variable is always valid. > > > The issue here is not what the constructor does -- it hasn't been invoked > yet for the element on which println is invoked -- but rather that the fact > that the array element is undefined should be capable of being caught. > > On Fri, Jan 19, 2024 at 11:48?AM Qu?n Anh Mai wrote: > >> I forgot to cc valhalla-dev >> >> ---------- Forwarded message --------- >> From: Qu?n Anh Mai >> Date: Sat, 20 Jan 2024 at 00:33 >> Subject: Re: Null-restricted types: Why so complicated? >> To: John Bossons >> >> >> Hi, >> >> > But in Java idiom that means that a developer can invoke the public >> implicit constructor, which will cause confusion. >> >> An implicit constructor can be invoked like any other constructor, and it >> will return an all-zero instance of the corresponding class. >> >> > My further suggestion is that appending ! 
to a type should mean that >> the default initialized value of an instance (all fields zero) is >> equivalent to null, so that >> > Range![] a = new Range![100]; // allocated with zero values >> > System.out.println(a[5]); // throws NullPointerException >> (zero fields) >> > This better conforms to current idiom, where the initial initialization >> is with nulls and the println invocation on a null array element or field >> throws a NPE. >> >> What is the value of this proposal? If you want the all-zero instance to >> be equivalent to null, just do not have any constructor that initializes an >> instance to that state. The whole point of null-restricted fields/variables >> is to indicate that the field/variable is always valid. >> >> I think you are having some confusion. Null-restriction is the property >> of a variable/field, i.e. the property of the holder, not of the class >> itself. The class having implicit constructors simply means that it allows >> the existence of null-restricted fields/variables. The class can be used as >> normal with non-null-restricted types. (e.g Range r = null;) >> >> Regards, >> Quan Anh >> >> On Sat, 20 Jan 2024 at 00:09, John Bossons wrote: >> >>> Thanks for your comments. I was not sufficiently explicit. >>> >>> Let me focus on implicit. I guess my dislike is of introducing a 'fake' >>> constructor into the definition of a class. I say 'fake' because, as I >>> understand it, the only purpose of the implicit constructor is to indicate >>> to the JVM/compiler that a never-null instance can be created. But in Java >>> idiom that means that a developer can invoke the public implicit >>> constructor, which will cause confusion. >>> >>> Maybe it would be better to require a potentially null-restricted class >>> to extend a marker interface ('extends NeverNullPossible'? Or maybe, >>> looking ahead to my next comment, 'extends AllZerosIsNull'?). That would >>> enable the compiler to catch an invalid use of the ! 
marker in a >>> declaration, just as the proposed implicit constructor does, while >>> conforming better to common Java idiom. >>> >>> My further suggestion is that appending ! to a type should mean that the >>> default initialized value of an instance (all fields zero) is equivalent to >>> null, so that >>> Range![] a = new Range![100]; // allocated with zero values >>> System.out.println(a[5]); // throws NullPointerException >>> (zero fields) >>> This better conforms to current idiom, where the initial initialization >>> is with nulls and the println invocation on a null array element or field >>> throws a NPE. >>> >>> As you say, my suggestion means runtime testing to determine if all >>> fields are zero, which has a performance cost. This will only occur if the >>> JVM implements the ! specification, which it presumably will only do if the >>> object is small. And the cost will be small (I am presuming) relative to >>> savings from allowing the memory footprint to match that of primitives. Am >>> I wrong? There is value in conforming to current idiom. >>> >>> Turning to the LooselyConsistentValue, I withdraw my comments. I >>> mistakenly presumed that its use would be required, which is false. It >>> simply enables a single-threaded (or volatile-protected) application to >>> allow additional inlining, which is harmless. >>> >>> John >>> >>> On Thu, Jan 18, 2024 at 4:56?PM - wrote: >>> >>>> Hi John, >>>> >>>> On Thu, Jan 18, 2024 at 2:30?PM John Bossons >>>> wrote: >>>> >>>>> Hi all, >>>>> >>>>> Maybe I am missing something, but the proposal seems to be trying to >>>>> do too much. >>>>> >>>>> Specifically: Why not simply provide that appending ! to a type >>>>> specification for an object (field, array element, or parameter) means that >>>>> that the object is not only null-restricted but also never zero and >>>>> necessarily non-atomic unless small? 
>>>>> >>>> First, a reminder that some objects cannot be non-atomic, mostly when >>>> fields have dependencies/constraints on each other: if you have a range, >>>> you cannot allow its lower bound to be larger than its upper bound. >>>> Non-atomic representations cannot avoid this pitfall. Also you seem >>>> to misunderstand non-atomic: if an object is non-atomic, each of its fields >>>> can update independently from each other, so a 3-d position can be >>>> non-atomic, but not so for a range. Non-atomicity is dangerous, and it >>>> should not be the default. However, if an atomic class is small enough, >>>> like OptionalInt (as now many architecture has like atomic handling of 16 >>>> bytes etc.) JVM may choose to apply non-atomic optimizations to them for >>>> better performance without violating their object constraints. >>>> >>>>> >>>>> Why complicate the specification with an implicit constructor that a >>>>> developer will never explicitly invoke? Why permit a developer to 'opt in' >>>>> to non-atomic? >>>>> >>>> The implicit constructor can always be called; its existence asks >>>> programmers to affirm that the zero-filled inlined instance is a valid >>>> instance. And this instance is different from a null, as null is a pointer, >>>> yet the zero-instance has a different size defined by the class layout in >>>> the stack/heap. >>>> >>>>> >>>>> Sure, that means trying to read a zero value triggers a NPE. That just >>>>> means that a type that can legitimately have a zero value cannot be >>>>> specified as null-restricted, since a zero value (e.g. a {null, null} Name) >>>>> is the equivalent of a null unrestricted value object. Why go beyond that? >>>>> If a non-null zero value is possible, the type cannot be null-restricted >>>>> and so can only be an unrestricted JEP 401 value type. End of story. >>>>> >>>> You see the inlined zero instance and the null pointer have different >>>> sizes, and thus they are not exchangeable. 
Converting the inlined zero >>>> instance to null to throw NPE is complex and hurtful to performance as you >>>> will scan unrelated bits for almost every field access. >>>> >>>> And for unrestricted value type, yes, they exist and can possibly be >>>> inlined as well if the restricted type is small enough (i.e. has space for >>>> extra bit indicating nullity) But reminder, the nullity bit itself isn't >>>> even non-atomic with (depends on) the rest of the object! You don't want >>>> the nullity to indicate null while the rest of the object indicate some >>>> sort of non-null value, which can happen in a non-atomic context. >>>> >>>>> >>>>> With respect to non-atomic, what is new? Yes, unexpected instances >>>>> may occur without synchronization if the object is larger than the word >>>>> size of the implementation. Why do we need to extend a >>>>> LooselyConsistentValue interface to know/permit that? >>>>> >>>> Unexpected instances don't occur without synchronization if you use >>>> finals, such as in Java's String or immutable List.of(). These APIs may >>>> capture any "permitted value" from the arrays passed in, but once >>>> constructed, the captured value remains constant no matter which thread >>>> observes the String/List object reference. (Technically, JVM implements >>>> this with a store-store fence between end of field writes in the >>>> constructor and object reference is shared anywhere, and a load-load fence >>>> between object reference read and field read) Value classes is about the >>>> safety of final fields in programming instead of the close encounter of >>>> third kinds of synchronization, volatiles, and fences. >>>> >>>>> >>>>> Can we not keep this 'simple' (if that word has meaning in this >>>>> context)? What am I missing? >>>>> >>>> I think you are missing a bit about how the layout (inlining is >>>> represented in memory) and value classes (the thread safety its final >>>> offers) work, and what "non-atomic" means. 
Feel free to question more.
>>>>
>>>>>
>>>>> John
>>>>>
>>>>>
>>>>> --
>>>>> Phone: (416) 450-3584 (cell)
>>>>>
>>>>
>>>
>>> --
>>> Phone: (416) 450-3584 (cell)
>>>
>>
>
> --
> Phone: (416) 450-3584 (cell)
>

From liangchenblue at gmail.com Fri Jan 19 17:51:56 2024
From: liangchenblue at gmail.com (-)
Date: Fri, 19 Jan 2024 11:51:56 -0600
Subject: Null-restricted types: Why so complicated?
In-Reply-To:
References:
Message-ID:

On Fri, Jan 19, 2024 at 10:07 AM John Bossons wrote:

> Thanks for your comments. I was not sufficiently explicit.
>
> Let me focus on implicit. I guess my dislike is of introducing a 'fake'
> constructor into the definition of a class. I say 'fake' because, as I
> understand it, the only purpose of the implicit constructor is to indicate
> to the JVM/compiler that a never-null instance can be created. But in Java
> idiom that means that a developer can invoke the public implicit
> constructor, which will cause confusion.
>
It is a real constructor like the default no-arg one generated by javac,
and developers always CAN invoke that constructor. It is, however, a
watered-down version: the existing default constructor can perform side
effects like `private List values = new ArrayList<>();` injected at the end
of the constructor, while the implicit one must give that up so that the
JVM can construct zero instances cheaply yet correctly. As a result, this
constructor cannot declare any side-effect code or a custom superconstructor
call, so it will look like a "fake" one, yet it is no different from a true
constructor.

> Maybe it would be better to require a potentially null-restricted class to
> extend a marker interface ('extends NeverNullPossible'? Or maybe, looking
> ahead to my next comment, 'extends AllZerosIsNull'?). That would enable the
> compiler to catch an invalid use of the !
marker in a declaration, just as
> the proposed implicit constructor does, while conforming better to common
> Java idiom.
>
A zero instance is a class capability indeed, and such classes must be
final. My only point against marker interfaces is that I don't think the
Java compiler ever emits errors simply because your class implements an
unsuitable interface.

> My further suggestion is that appending ! to a type should mean that the
> default initialized value of an instance (all fields zero) is equivalent to
> null, so that
> Range![] a = new Range![100]; // allocated with zero values
> System.out.println(a[5]); // throws NullPointerException (zero
> fields)
> This better conforms to current idiom, where the initial initialization is
> with nulls and the println invocation on a null array element or field
> throws a NPE.
>
Consider this: how would you differentiate a null Range versus a Range[0,
0]? Are both all zero bits? This is where the zero instance starts: before
anything, the zero instance has always been a VALID instance of an object,
yet its inlined representation will be all zero bits, which means it will
coincide with null; thus, we introduce the null-restricted concept to avoid
the performance pitfalls we would suffer to represent a null.

Also, adding on to Anh Mai's comment, recall that Range is a value class (a
prerequisite to null-restriction), so its identity doesn't matter; the VM
is totally permitted to inline the null-friendly range array with 9-byte
units (8 bytes + a single bit indicating nullity), and it is still somewhat
a memory win over linking to regular objects. But we might need some
fine-grained control to ensure the VM allocates an inlined array instead of
a pointer array in this case. In this case, testing null and throwing NPE
would simply be checking one bit, which is more reliable than scanning a
whole byte interval too.

> As you say, my suggestion means runtime testing to determine if all fields
> are zero, which has a performance cost.
This will only occur if the JVM > implements the ! specification, which it presumably will only do if the > object is small. And the cost will be small (I am presuming) relative to > savings from allowing the memory footprint to match that of primitives. Am > I wrong? There is value in conforming to current idiom. > > Turning to the LooselyConsistentValue, I withdraw my comments. I > mistakenly presumed that its use would be required, which is false. It > simply enables a single-threaded (or volatile-protected) application to > allow additional inlining, which is harmless. > > John > > On Thu, Jan 18, 2024 at 4:56?PM - wrote: > >> Hi John, >> >> On Thu, Jan 18, 2024 at 2:30?PM John Bossons wrote: >> >>> Hi all, >>> >>> Maybe I am missing something, but the proposal seems to be trying to do >>> too much. >>> >>> Specifically: Why not simply provide that appending ! to a type >>> specification for an object (field, array element, or parameter) means that >>> that the object is not only null-restricted but also never zero and >>> necessarily non-atomic unless small? >>> >> First, a reminder that some objects cannot be non-atomic, mostly when >> fields have dependencies/constraints on each other: if you have a range, >> you cannot allow its lower bound to be larger than its upper bound. >> Non-atomic representations cannot avoid this pitfall. Also you seem >> to misunderstand non-atomic: if an object is non-atomic, each of its fields >> can update independently from each other, so a 3-d position can be >> non-atomic, but not so for a range. Non-atomicity is dangerous, and it >> should not be the default. However, if an atomic class is small enough, >> like OptionalInt (as now many architecture has like atomic handling of 16 >> bytes etc.) JVM may choose to apply non-atomic optimizations to them for >> better performance without violating their object constraints. 
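The cross-field constraint described in the quoted paragraph (a range's lower bound must not exceed its upper bound) can be made concrete in plain Java; `Range` here is a stand-in for the draft spec's example class:

```java
// A Range whose constructor enforces lo <= hi. Every instance produced by
// normal construction satisfies the invariant; the tearing hazard discussed
// above is that a non-atomic flattened layout could mix the halves of two
// racing writes -- e.g. lo from new Range(7, 9) with hi from new Range(0, 5),
// yielding a Range(7, 5) that no constructor ever produced.
record Range(int lo, int hi) {
    Range {
        if (lo > hi) {
            throw new IllegalArgumentException("lo > hi: " + lo + " > " + hi);
        }
    }
}

public class RangeInvariantDemo {
    public static void main(String[] args) {
        System.out.println(new Range(0, 5));   // constructed normally, invariant holds
        try {
            new Range(7, 5);                   // rejected by the constructor
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

This is why the quoted message argues that non-atomicity should be opt-in (the `LooselyConsistentValue` interface discussed later in the thread): a class with constraints spanning multiple fields cannot tolerate torn reads.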
>> >>> >>> Why complicate the specification with an implicit constructor that a >>> developer will never explicitly invoke? Why permit a developer to 'opt in' >>> to non-atomic? >>> >> The implicit constructor can always be called; its existence asks >> programmers to affirm that the zero-filled inlined instance is a valid >> instance. And this instance is different from a null, as null is a pointer, >> yet the zero-instance has a different size defined by the class layout in >> the stack/heap. >> >>> >>> Sure, that means trying to read a zero value triggers a NPE. That just >>> means that a type that can legitimately have a zero value cannot be >>> specified as null-restricted, since a zero value (e.g. a {null, null} Name) >>> is the equivalent of a null unrestricted value object. Why go beyond that? >>> If a non-null zero value is possible, the type cannot be null-restricted >>> and so can only be an unrestricted JEP 401 value type. End of story. >>> >> You see the inlined zero instance and the null pointer have different >> sizes, and thus they are not exchangeable. Converting the inlined zero >> instance to null to throw NPE is complex and hurtful to performance as you >> will scan unrelated bits for almost every field access. >> >> And for unrestricted value type, yes, they exist and can possibly be >> inlined as well if the restricted type is small enough (i.e. has space for >> extra bit indicating nullity) But reminder, the nullity bit itself isn't >> even non-atomic with (depends on) the rest of the object! You don't want >> the nullity to indicate null while the rest of the object indicate some >> sort of non-null value, which can happen in a non-atomic context. >> >>> >>> With respect to non-atomic, what is new? Yes, unexpected instances may >>> occur without synchronization if the object is larger than the word size of >>> the implementation. Why do we need to extend a LooselyConsistentValue >>> interface to know/permit that? 
>>> >> Unexpected instances don't occur without synchronization if you use
>> finals, such as in Java's String or immutable List.of(). These APIs may
>> capture any "permitted value" from the arrays passed in, but once
>> constructed, the captured value remains constant no matter which thread
>> observes the String/List object reference. (Technically, JVM implements
>> this with a store-store fence between end of field writes in the
>> constructor and object reference is shared anywhere, and a load-load fence
>> between object reference read and field read) Value classes is about the
>> safety of final fields in programming instead of the close encounter of
>> third kinds of synchronization, volatiles, and fences.
>>
>>>
>>> Can we not keep this 'simple' (if that word has meaning in this
>>> context)? What am I missing?
>>>
>> I think you are missing a bit about how the layout (inlining is
>> represented in memory) and value classes (the thread safety its final
>> offers) work, and what "non-atomic" means. Feel free to question more.
>>
>>>
>>> John
>>>
>>>
>>> --
>>> Phone: (416) 450-3584 (cell)
>>>
>>
>
> --
> Phone: (416) 450-3584 (cell)
>

From jbossons at gmail.com Fri Jan 19 18:05:47 2024
From: jbossons at gmail.com (John Bossons)
Date: Fri, 19 Jan 2024 13:05:47 -0500
Subject: Null-restricted types: Why so complicated?
In-Reply-To:
References:
Message-ID:

Hi again,

What you suggest is one solution. My counter: It's unsafe. The JVM should
be able to protect a developer from invoking doSomethingWith(a[5]) on an
undefined array element (or method parameter).

On Fri, Jan 19, 2024 at 12:43 PM Quân Anh Mai wrote:

> Why not just
>
> Range[] a = new Range[100]; // allocate with null values
> System.out.println(a[5]); // NullPointerException
>
> By using Range![] you accept that a zero value is acceptable, similar to
> how an int[] works.
If you do not want the uninitialized values to be > usable then do not use null-restricted type. > > Cheer, > Quan Anh > > On Sat, 20 Jan 2024 at 01:34, John Bossons wrote: > >> Hi Quan Anh, >> >> We're talking past each other. >> >> ME: But in Java idiom that means that a developer can invoke the public >> implicit constructor, which will cause confusion. >> YOU: An implicit constructor can be invoked like any other constructor, >> and it will return an all-zero instance of the corresponding class. >> >> >> Precisely. Which will often not be valid in the application context. It >> should be possible for a constructor to exclude an all-zero instance, such >> as a zero-length Range or an all-null Name (to use two examples in the >> draft spec) without giving up the ability to specify that it is >> null-restricted. Or for the 'real' constructor to be private, invoked from >> a factory method, which is effectively made useless as a protective feature >> if a public constructor is also provided. (If the implicit constructor >> could be specified as private, that would take care of the problem. >> Extending a marker interface is simpler.) >> >> ME: > My further suggestion is that appending ! to a type should mean >> that the default initialized value of an instance (all fields zero) is >> equivalent to null, so that >> > Range![] a = new Range![100]; // allocated with zero values >> > System.out.println(a[5]); // throws NullPointerException >> (zero fields) >> > This better conforms to current idiom, where the initial initialization >> is with nulls and the println invocation on a null array element or field >> throws a NPE. >> YOU: What is the value of this proposal? If you want the all-zero >> instance to be equivalent to null, just do not have any constructor that >> initializes an instance to that state. The whole point of null-restricted >> fields/variables is to indicate that the field/variable is always valid. 
>> The issue here is not what the constructor does -- it hasn't been invoked
>> yet for the element on which println is invoked -- but rather that the
>> fact that the array element is undefined should be capable of being
>> caught.
>>
>> On Fri, Jan 19, 2024 at 11:48 AM Quân Anh Mai wrote:
>>
>>> I forgot to cc valhalla-dev
>>>
>>> ---------- Forwarded message ---------
>>> From: Quân Anh Mai
>>> Date: Sat, 20 Jan 2024 at 00:33
>>> Subject: Re: Null-restricted types: Why so complicated?
>>> To: John Bossons
>>>
>>> Hi,
>>>
>>> > But in Java idiom that means that a developer can invoke the public
>>> implicit constructor, which will cause confusion.
>>>
>>> An implicit constructor can be invoked like any other constructor, and
>>> it will return an all-zero instance of the corresponding class.
>>>
>>> > My further suggestion is that appending ! to a type should mean that
>>> the default initialized value of an instance (all fields zero) is
>>> equivalent to null, so that
>>> >     Range![] a = new Range![100];  // allocated with zero values
>>> >     System.out.println(a[5]);      // throws NullPointerException (zero fields)
>>> > This better conforms to current idiom, where the initial
>>> initialization is with nulls and the println invocation on a null array
>>> element or field throws an NPE.
>>>
>>> What is the value of this proposal? If you want the all-zero instance
>>> to be equivalent to null, just do not have any constructor that
>>> initializes an instance to that state. The whole point of
>>> null-restricted fields/variables is to indicate that the field/variable
>>> is always valid.
>>>
>>> I think you are having some confusion. Null-restriction is the property
>>> of a variable/field, i.e. the property of the holder, not of the class
>>> itself. The class having implicit constructors simply means that it
>>> allows the existence of null-restricted fields/variables. The class can
>>> be used as normal with non-null-restricted types (e.g. Range r = null;).
>>>
>>> Regards,
>>> Quan Anh
>>>
>>> On Sat, 20 Jan 2024 at 00:09, John Bossons wrote:
>>>
>>>> Thanks for your comments. I was not sufficiently explicit.
>>>>
>>>> Let me focus on implicit. I guess my dislike is of introducing a
>>>> 'fake' constructor into the definition of a class. I say 'fake'
>>>> because, as I understand it, the only purpose of the implicit
>>>> constructor is to indicate to the JVM/compiler that a never-null
>>>> instance can be created. But in Java idiom that means that a developer
>>>> can invoke the public implicit constructor, which will cause confusion.
>>>>
>>>> Maybe it would be better to require a potentially null-restricted
>>>> class to extend a marker interface ('extends NeverNullPossible'? Or
>>>> maybe, looking ahead to my next comment, 'extends AllZerosIsNull'?).
>>>> That would enable the compiler to catch an invalid use of the ! marker
>>>> in a declaration, just as the proposed implicit constructor does, while
>>>> conforming better to common Java idiom.
>>>>
>>>> My further suggestion is that appending ! to a type should mean that
>>>> the default initialized value of an instance (all fields zero) is
>>>> equivalent to null, so that
>>>>     Range![] a = new Range![100];  // allocated with zero values
>>>>     System.out.println(a[5]);      // throws NullPointerException (zero fields)
>>>> This better conforms to current idiom, where the initial
>>>> initialization is with nulls and the println invocation on a null
>>>> array element or field throws an NPE.
>>>>
>>>> As you say, my suggestion means runtime testing to determine if all
>>>> fields are zero, which has a performance cost. This will only occur if
>>>> the JVM implements the ! specification, which it presumably will only
>>>> do if the object is small. And the cost will be small (I am presuming)
>>>> relative to savings from allowing the memory footprint to match that
>>>> of primitives. Am I wrong? There is value in conforming to current
>>>> idiom.
>>>>
>>>> Turning to the LooselyConsistentValue, I withdraw my comments. I
>>>> mistakenly presumed that its use would be required, which is false. It
>>>> simply enables a single-threaded (or volatile-protected) application
>>>> to allow additional inlining, which is harmless.
>>>>
>>>> John
>>>>
>>>> On Thu, Jan 18, 2024 at 4:56 PM - wrote:
>>>>
>>>>> Hi John,
>>>>>
>>>>> On Thu, Jan 18, 2024 at 2:30 PM John Bossons wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> Maybe I am missing something, but the proposal seems to be trying to
>>>>>> do too much.
>>>>>>
>>>>>> Specifically: Why not simply provide that appending ! to a type
>>>>>> specification for an object (field, array element, or parameter)
>>>>>> means that the object is not only null-restricted but also never
>>>>>> zero and necessarily non-atomic unless small?
>>>>>
>>>>> First, a reminder that some objects cannot be non-atomic, mostly when
>>>>> fields have dependencies/constraints on each other: if you have a
>>>>> range, you cannot allow its lower bound to be larger than its upper
>>>>> bound. Non-atomic representations cannot avoid this pitfall. Also you
>>>>> seem to misunderstand non-atomic: if an object is non-atomic, each of
>>>>> its fields can update independently of the others, so a 3-d position
>>>>> can be non-atomic, but not so for a range. Non-atomicity is
>>>>> dangerous, and it should not be the default. However, if an atomic
>>>>> class is small enough, like OptionalInt (as many architectures now
>>>>> have atomic handling of 16 bytes, etc.), the JVM may choose to apply
>>>>> non-atomic optimizations to it for better performance without
>>>>> violating its object constraints.
>>>>>
>>>>>> Why complicate the specification with an implicit constructor that a
>>>>>> developer will never explicitly invoke? Why permit a developer to
>>>>>> 'opt in' to non-atomic?
>>>>>
>>>>> The implicit constructor can always be called; its existence asks
>>>>> programmers to affirm that the zero-filled inlined instance is a
>>>>> valid instance. And this instance is different from a null, as null
>>>>> is a pointer, yet the zero-instance has a different size defined by
>>>>> the class layout in the stack/heap.
>>>>>
>>>>>> Sure, that means trying to read a zero value triggers a NPE. That
>>>>>> just means that a type that can legitimately have a zero value
>>>>>> cannot be specified as null-restricted, since a zero value (e.g. a
>>>>>> {null, null} Name) is the equivalent of a null unrestricted value
>>>>>> object. Why go beyond that? If a non-null zero value is possible,
>>>>>> the type cannot be null-restricted and so can only be an
>>>>>> unrestricted JEP 401 value type. End of story.
>>>>>
>>>>> You see, the inlined zero instance and the null pointer have
>>>>> different sizes, and thus they are not exchangeable. Converting the
>>>>> inlined zero instance to null to throw NPE is complex and hurtful to
>>>>> performance, as you will scan unrelated bits for almost every field
>>>>> access.
>>>>>
>>>>> And for unrestricted value types, yes, they exist and can possibly be
>>>>> inlined as well if the restricted type is small enough (i.e. has
>>>>> space for an extra bit indicating nullity). But reminder: the nullity
>>>>> bit itself isn't even non-atomic with (depends on) the rest of the
>>>>> object! You don't want the nullity to indicate null while the rest of
>>>>> the object indicates some sort of non-null value, which can happen in
>>>>> a non-atomic context.
>>>>>
>>>>>> With respect to non-atomic, what is new? Yes, unexpected instances
>>>>>> may occur without synchronization if the object is larger than the
>>>>>> word size of the implementation. Why do we need to extend a
>>>>>> LooselyConsistentValue interface to know/permit that?
>>>>> Unexpected instances don't occur without synchronization if you use
>>>>> finals, such as in Java's String or immutable List.of(). These APIs
>>>>> may capture any "permitted value" from the arrays passed in, but once
>>>>> constructed, the captured value remains constant no matter which
>>>>> thread observes the String/List object reference. (Technically, the
>>>>> JVM implements this with a store-store fence between the end of the
>>>>> field writes in the constructor and the publication of the object
>>>>> reference, and a load-load fence between the read of the object
>>>>> reference and the reads of its fields.) Value classes are about the
>>>>> safety of final fields in programming, rather than close encounters
>>>>> of the third kind with synchronization, volatiles, and fences.
>>>>>
>>>>>> Can we not keep this 'simple' (if that word has meaning in this
>>>>>> context)? What am I missing?
>>>>>
>>>>> I think you are missing a bit about how the layout (how inlining is
>>>>> represented in memory) and value classes (the thread safety their
>>>>> finals offer) work, and what "non-atomic" means. Feel free to
>>>>> question more.
>>>>>
>>>>>> John
>>>>>>
>>>>>> --
>>>>>> Phone: (416) 450-3584 (cell)
>>>>
>>>> --
>>>> Phone: (416) 450-3584 (cell)
>>
>> --
>> Phone: (416) 450-3584 (cell)

-- 
Phone: (416) 450-3584 (cell)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pedro.lamarao at prodist.com.br  Fri Jan 19 18:22:59 2024
From: pedro.lamarao at prodist.com.br (=?UTF-8?Q?Pedro_Lamar=C3=A3o?=)
Date: Fri, 19 Jan 2024 15:22:59 -0300
Subject: Null-restricted types: Why so complicated?
In-Reply-To: 
References: 
Message-ID: 

On Fri., Jan 19, 2024 at 14:53, - wrote:

> On Fri, Jan 19, 2024 at 10:07 AM John Bossons wrote:
>
>> Thanks for your comments. I was not sufficiently explicit.
>>
>> Let me focus on implicit. I guess my dislike is of introducing a 'fake'
>> constructor into the definition of a class. I say 'fake' because, as I
>> understand it, the only purpose of the implicit constructor is to
>> indicate to the JVM/compiler that a never-null instance can be created.
>> But in Java idiom that means that a developer can invoke the public
>> implicit constructor, which will cause confusion.
>
> It is a real constructor like the default no-arg one generated by javac,
> and developers always CAN invoke that constructor. It is, however, a
> watered-down version, because our existing default one can perform side
> effects like `private List values = new ArrayList<>();` injected at the
> end of the constructor, while the implicit one must give that up so that
> the JVM can construct zero instances cheaply yet correctly. As a result,
> this constructor cannot declare any side-effect code or declare a custom
> superconstructor call, so it will look like a "fake" one, yet it is no
> different from a true constructor.

C++ has the following notation for a compiler-generated "default"
constructor. It has been in place for some years and has been well
received. The meaning of the existing C++ "default" and the meaning of
the proposed Java "implicit" are very similar. I am not aware of the
meaning of "default" being particularly difficult to teach.

    class foo {
        foo () = default;
    };

-- 
Pedro Lamarão
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From anhmdq at gmail.com  Fri Jan 19 18:23:16 2024
From: anhmdq at gmail.com (=?UTF-8?Q?Qu=C3=A2n_Anh_Mai?=)
Date: Sat, 20 Jan 2024 02:23:16 +0800
Subject: Null-restricted types: Why so complicated?
In-Reply-To: 
References: 
Message-ID: 

Hi,

I still do not understand your question. By declaring an implicit
constructor, you are indicating that the all-zero instance is usable, and
by using Range![], you are telling Java that the all-zero instance is
usable in this particular context. Why does the JVM need to protect you
from invoking methods using an instance that you are specifically
indicating as valid?
Cheer, Quan Anh On Sat, 20 Jan 2024 at 02:05, John Bossons wrote: > Hi again, > > What you suggest is one solution. My counter: It's unsafe. The JVM should > be able to protect a developer from invoking doSomethingWith(a[5]) on an > undefined array element (or method parameter). > > On Fri, Jan 19, 2024 at 12:43?PM Qu?n Anh Mai wrote: > >> Why not just >> >> Range[] a = new Range[100]; // allocate with null values >> System.out.println(a[5]); // NullPointerException >> >> By using Range![] you accept that a zero value is acceptable, similar to >> how an int[] works. If you do not want the uninitialized values to be >> usable then do not use null-restricted type. >> >> Cheer, >> Quan Anh >> >> On Sat, 20 Jan 2024 at 01:34, John Bossons wrote: >> >>> Hi Quan Anh, >>> >>> We're talking past each other. >>> >>> ME: But in Java idiom that means that a developer can invoke the public >>> implicit constructor, which will cause confusion. >>> YOU: An implicit constructor can be invoked like any other constructor, >>> and it will return an all-zero instance of the corresponding class. >>> >>> >>> Precisely. Which will often not be valid in the application context. It >>> should be possible for a constructor to exclude an all-zero instance, such >>> as a zero-length Range or an all-null Name (to use two examples in the >>> draft spec) without giving up the ability to specify that it is >>> null-restricted. Or for the 'real' constructor to be private, invoked from >>> a factory method, which is effectively made useless as a protective feature >>> if a public constructor is also provided. (If the implicit constructor >>> could be specified as private, that would take care of the problem. >>> Extending a marker interface is simpler.) >>> >>> ME: > My further suggestion is that appending ! 
to a type should mean >>> that the default initialized value of an instance (all fields zero) is >>> equivalent to null, so that >>> > Range![] a = new Range![100]; // allocated with zero values >>> > System.out.println(a[5]); // throws NullPointerException >>> (zero fields) >>> > This better conforms to current idiom, where the initial >>> initialization is with nulls and the println invocation on a null array >>> element or field throws a NPE. >>> YOU: What is the value of this proposal? If you want the all-zero >>> instance to be equivalent to null, just do not have any constructor that >>> initializes an instance to that state. The whole point of null-restricted >>> fields/variables is to indicate that the field/variable is always valid. >>> >>> >>> The issue here is not what the constructor does -- it hasn't been >>> invoked yet for the element on which println is invoked -- but rather that >>> the fact that the array element is undefined should be capable of being >>> caught. >>> >>> On Fri, Jan 19, 2024 at 11:48?AM Qu?n Anh Mai wrote: >>> >>>> I forgot to cc valhalla-dev >>>> >>>> ---------- Forwarded message --------- >>>> From: Qu?n Anh Mai >>>> Date: Sat, 20 Jan 2024 at 00:33 >>>> Subject: Re: Null-restricted types: Why so complicated? >>>> To: John Bossons >>>> >>>> >>>> Hi, >>>> >>>> > But in Java idiom that means that a developer can invoke the public >>>> implicit constructor, which will cause confusion. >>>> >>>> An implicit constructor can be invoked like any other constructor, and >>>> it will return an all-zero instance of the corresponding class. >>>> >>>> > My further suggestion is that appending ! 
to a type should mean that >>>> the default initialized value of an instance (all fields zero) is >>>> equivalent to null, so that >>>> > Range![] a = new Range![100]; // allocated with zero values >>>> > System.out.println(a[5]); // throws NullPointerException >>>> (zero fields) >>>> > This better conforms to current idiom, where the initial >>>> initialization is with nulls and the println invocation on a null array >>>> element or field throws a NPE. >>>> >>>> What is the value of this proposal? If you want the all-zero instance >>>> to be equivalent to null, just do not have any constructor that initializes >>>> an instance to that state. The whole point of null-restricted >>>> fields/variables is to indicate that the field/variable is always valid. >>>> >>>> I think you are having some confusion. Null-restriction is the property >>>> of a variable/field, i.e. the property of the holder, not of the class >>>> itself. The class having implicit constructors simply means that it allows >>>> the existence of null-restricted fields/variables. The class can be used as >>>> normal with non-null-restricted types. (e.g Range r = null;) >>>> >>>> Regards, >>>> Quan Anh >>>> >>>> On Sat, 20 Jan 2024 at 00:09, John Bossons wrote: >>>> >>>>> Thanks for your comments. I was not sufficiently explicit. >>>>> >>>>> Let me focus on implicit. I guess my dislike is of introducing a >>>>> 'fake' constructor into the definition of a class. I say 'fake' because, as >>>>> I understand it, the only purpose of the implicit constructor is to >>>>> indicate to the JVM/compiler that a never-null instance can be created. But >>>>> in Java idiom that means that a developer can invoke the public implicit >>>>> constructor, which will cause confusion. >>>>> >>>>> Maybe it would be better to require a potentially null-restricted >>>>> class to extend a marker interface ('extends NeverNullPossible'? Or maybe, >>>>> looking ahead to my next comment, 'extends AllZerosIsNull'?). 
That would >>>>> enable the compiler to catch an invalid use of the ! marker in a >>>>> declaration, just as the proposed implicit constructor does, while >>>>> conforming better to common Java idiom. >>>>> >>>>> My further suggestion is that appending ! to a type should mean that >>>>> the default initialized value of an instance (all fields zero) is >>>>> equivalent to null, so that >>>>> Range![] a = new Range![100]; // allocated with zero values >>>>> System.out.println(a[5]); // throws NullPointerException >>>>> (zero fields) >>>>> This better conforms to current idiom, where the initial >>>>> initialization is with nulls and the println invocation on a null array >>>>> element or field throws a NPE. >>>>> >>>>> As you say, my suggestion means runtime testing to determine if all >>>>> fields are zero, which has a performance cost. This will only occur if the >>>>> JVM implements the ! specification, which it presumably will only do if the >>>>> object is small. And the cost will be small (I am presuming) relative to >>>>> savings from allowing the memory footprint to match that of primitives. Am >>>>> I wrong? There is value in conforming to current idiom. >>>>> >>>>> Turning to the LooselyConsistentValue, I withdraw my comments. I >>>>> mistakenly presumed that its use would be required, which is false. It >>>>> simply enables a single-threaded (or volatile-protected) application to >>>>> allow additional inlining, which is harmless. >>>>> >>>>> John >>>>> >>>>> On Thu, Jan 18, 2024 at 4:56?PM - wrote: >>>>> >>>>>> Hi John, >>>>>> >>>>>> On Thu, Jan 18, 2024 at 2:30?PM John Bossons >>>>>> wrote: >>>>>> >>>>>>> Hi all, >>>>>>> >>>>>>> Maybe I am missing something, but the proposal seems to be trying to >>>>>>> do too much. >>>>>>> >>>>>>> Specifically: Why not simply provide that appending ! 
to a type >>>>>>> specification for an object (field, array element, or parameter) means that >>>>>>> that the object is not only null-restricted but also never zero and >>>>>>> necessarily non-atomic unless small? >>>>>>> >>>>>> First, a reminder that some objects cannot be non-atomic, mostly when >>>>>> fields have dependencies/constraints on each other: if you have a range, >>>>>> you cannot allow its lower bound to be larger than its upper bound. >>>>>> Non-atomic representations cannot avoid this pitfall. Also you seem >>>>>> to misunderstand non-atomic: if an object is non-atomic, each of its fields >>>>>> can update independently from each other, so a 3-d position can be >>>>>> non-atomic, but not so for a range. Non-atomicity is dangerous, and it >>>>>> should not be the default. However, if an atomic class is small enough, >>>>>> like OptionalInt (as now many architecture has like atomic handling of 16 >>>>>> bytes etc.) JVM may choose to apply non-atomic optimizations to them for >>>>>> better performance without violating their object constraints. >>>>>> >>>>>>> >>>>>>> Why complicate the specification with an implicit constructor that a >>>>>>> developer will never explicitly invoke? Why permit a developer to 'opt in' >>>>>>> to non-atomic? >>>>>>> >>>>>> The implicit constructor can always be called; its existence asks >>>>>> programmers to affirm that the zero-filled inlined instance is a valid >>>>>> instance. And this instance is different from a null, as null is a pointer, >>>>>> yet the zero-instance has a different size defined by the class layout in >>>>>> the stack/heap. >>>>>> >>>>>>> >>>>>>> Sure, that means trying to read a zero value triggers a NPE. That >>>>>>> just means that a type that can legitimately have a zero value cannot be >>>>>>> specified as null-restricted, since a zero value (e.g. a {null, null} Name) >>>>>>> is the equivalent of a null unrestricted value object. Why go beyond that? 
>>>>>>> If a non-null zero value is possible, the type cannot be null-restricted >>>>>>> and so can only be an unrestricted JEP 401 value type. End of story. >>>>>>> >>>>>> You see the inlined zero instance and the null pointer have different >>>>>> sizes, and thus they are not exchangeable. Converting the inlined zero >>>>>> instance to null to throw NPE is complex and hurtful to performance as you >>>>>> will scan unrelated bits for almost every field access. >>>>>> >>>>>> And for unrestricted value type, yes, they exist and can possibly be >>>>>> inlined as well if the restricted type is small enough (i.e. has space for >>>>>> extra bit indicating nullity) But reminder, the nullity bit itself isn't >>>>>> even non-atomic with (depends on) the rest of the object! You don't want >>>>>> the nullity to indicate null while the rest of the object indicate some >>>>>> sort of non-null value, which can happen in a non-atomic context. >>>>>> >>>>>>> >>>>>>> With respect to non-atomic, what is new? Yes, unexpected instances >>>>>>> may occur without synchronization if the object is larger than the word >>>>>>> size of the implementation. Why do we need to extend a >>>>>>> LooselyConsistentValue interface to know/permit that? >>>>>>> >>>>>> Unexpected instances don't occur without synchronization if you use >>>>>> finals, such as in Java's String or immutable List.of(). These APIs may >>>>>> capture any "permitted value" from the arrays passed in, but once >>>>>> constructed, the captured value remains constant no matter which thread >>>>>> observes the String/List object reference. 
(Technically, JVM implements >>>>>> this with a store-store fence between end of field writes in the >>>>>> constructor and object reference is shared anywhere, and a load-load fence >>>>>> between object reference read and field read) Value classes is about the >>>>>> safety of final fields in programming instead of the close encounter of >>>>>> third kinds of synchronization, volatiles, and fences. >>>>>> >>>>>>> >>>>>>> Can we not keep this 'simple' (if that word has meaning in this >>>>>>> context)? What am I missing? >>>>>>> >>>>>> I think you are missing a bit about how the layout (inlining is >>>>>> represented in memory) and value classes (the thread safety its final >>>>>> offers) work, and what "non-atomic" means. Feel free to question more. >>>>>> >>>>>>> >>>>>>> John >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Phone: (416) 450-3584 (cell) >>>>>>> >>>>>> >>>>> >>>>> -- >>>>> Phone: (416) 450-3584 (cell) >>>>> >>>> >>> >>> -- >>> Phone: (416) 450-3584 (cell) >>> >> > > -- > Phone: (416) 450-3584 (cell) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbossons at gmail.com Fri Jan 19 18:28:00 2024 From: jbossons at gmail.com (John Bossons) Date: Fri, 19 Jan 2024 13:28:00 -0500 Subject: Null-restricted types: Why so complicated? In-Reply-To: References: Message-ID: Because the instance hasn't been defined yet. The array has been defined, but not its elements. On Fri, Jan 19, 2024 at 1:23?PM Qu?n Anh Mai wrote: > Hi, > > I still do not understand your question. By declaring an implicit > constructor, you are indicating that the all-zero instance is usable, and > by using Range![], you are telling Java that the all-zero instance is > usable in this particular context. Why does the JVM need to protect you > from invoking methods using an instance that you are specifically > indicating as valid? > > Cheer, > Quan Anh > > On Sat, 20 Jan 2024 at 02:05, John Bossons wrote: > >> Hi again, >> >> What you suggest is one solution. 
My counter: It's unsafe. The JVM should >> be able to protect a developer from invoking doSomethingWith(a[5]) on an >> undefined array element (or method parameter). >> >> On Fri, Jan 19, 2024 at 12:43?PM Qu?n Anh Mai wrote: >> >>> Why not just >>> >>> Range[] a = new Range[100]; // allocate with null values >>> System.out.println(a[5]); // NullPointerException >>> >>> By using Range![] you accept that a zero value is acceptable, similar to >>> how an int[] works. If you do not want the uninitialized values to be >>> usable then do not use null-restricted type. >>> >>> Cheer, >>> Quan Anh >>> >>> On Sat, 20 Jan 2024 at 01:34, John Bossons wrote: >>> >>>> Hi Quan Anh, >>>> >>>> We're talking past each other. >>>> >>>> ME: But in Java idiom that means that a developer can invoke the >>>> public implicit constructor, which will cause confusion. >>>> YOU: An implicit constructor can be invoked like any other >>>> constructor, and it will return an all-zero instance of the corresponding >>>> class. >>>> >>>> >>>> Precisely. Which will often not be valid in the application context. It >>>> should be possible for a constructor to exclude an all-zero instance, such >>>> as a zero-length Range or an all-null Name (to use two examples in the >>>> draft spec) without giving up the ability to specify that it is >>>> null-restricted. Or for the 'real' constructor to be private, invoked from >>>> a factory method, which is effectively made useless as a protective feature >>>> if a public constructor is also provided. (If the implicit constructor >>>> could be specified as private, that would take care of the problem. >>>> Extending a marker interface is simpler.) >>>> >>>> ME: > My further suggestion is that appending ! 
to a type should mean >>>> that the default initialized value of an instance (all fields zero) is >>>> equivalent to null, so that >>>> > Range![] a = new Range![100]; // allocated with zero values >>>> > System.out.println(a[5]); // throws NullPointerException >>>> (zero fields) >>>> > This better conforms to current idiom, where the initial >>>> initialization is with nulls and the println invocation on a null array >>>> element or field throws a NPE. >>>> YOU: What is the value of this proposal? If you want the all-zero >>>> instance to be equivalent to null, just do not have any constructor that >>>> initializes an instance to that state. The whole point of null-restricted >>>> fields/variables is to indicate that the field/variable is always valid. >>>> >>>> >>>> The issue here is not what the constructor does -- it hasn't been >>>> invoked yet for the element on which println is invoked -- but rather that >>>> the fact that the array element is undefined should be capable of being >>>> caught. >>>> >>>> On Fri, Jan 19, 2024 at 11:48?AM Qu?n Anh Mai wrote: >>>> >>>>> I forgot to cc valhalla-dev >>>>> >>>>> ---------- Forwarded message --------- >>>>> From: Qu?n Anh Mai >>>>> Date: Sat, 20 Jan 2024 at 00:33 >>>>> Subject: Re: Null-restricted types: Why so complicated? >>>>> To: John Bossons >>>>> >>>>> >>>>> Hi, >>>>> >>>>> > But in Java idiom that means that a developer can invoke the public >>>>> implicit constructor, which will cause confusion. >>>>> >>>>> An implicit constructor can be invoked like any other constructor, and >>>>> it will return an all-zero instance of the corresponding class. >>>>> >>>>> > My further suggestion is that appending ! 
to a type should mean that >>>>> the default initialized value of an instance (all fields zero) is >>>>> equivalent to null, so that >>>>> > Range![] a = new Range![100]; // allocated with zero values >>>>> > System.out.println(a[5]); // throws NullPointerException >>>>> (zero fields) >>>>> > This better conforms to current idiom, where the initial >>>>> initialization is with nulls and the println invocation on a null array >>>>> element or field throws a NPE. >>>>> >>>>> What is the value of this proposal? If you want the all-zero instance >>>>> to be equivalent to null, just do not have any constructor that initializes >>>>> an instance to that state. The whole point of null-restricted >>>>> fields/variables is to indicate that the field/variable is always valid. >>>>> >>>>> I think you are having some confusion. Null-restriction is the >>>>> property of a variable/field, i.e. the property of the holder, not of the >>>>> class itself. The class having implicit constructors simply means that it >>>>> allows the existence of null-restricted fields/variables. The class can be >>>>> used as normal with non-null-restricted types. (e.g Range r = null;) >>>>> >>>>> Regards, >>>>> Quan Anh >>>>> >>>>> On Sat, 20 Jan 2024 at 00:09, John Bossons wrote: >>>>> >>>>>> Thanks for your comments. I was not sufficiently explicit. >>>>>> >>>>>> Let me focus on implicit. I guess my dislike is of introducing a >>>>>> 'fake' constructor into the definition of a class. I say 'fake' because, as >>>>>> I understand it, the only purpose of the implicit constructor is to >>>>>> indicate to the JVM/compiler that a never-null instance can be created. But >>>>>> in Java idiom that means that a developer can invoke the public implicit >>>>>> constructor, which will cause confusion. >>>>>> >>>>>> Maybe it would be better to require a potentially null-restricted >>>>>> class to extend a marker interface ('extends NeverNullPossible'? 
Or maybe, >>>>>> looking ahead to my next comment, 'extends AllZerosIsNull'?). That would >>>>>> enable the compiler to catch an invalid use of the ! marker in a >>>>>> declaration, just as the proposed implicit constructor does, while >>>>>> conforming better to common Java idiom. >>>>>> >>>>>> My further suggestion is that appending ! to a type should mean that >>>>>> the default initialized value of an instance (all fields zero) is >>>>>> equivalent to null, so that >>>>>> Range![] a = new Range![100]; // allocated with zero values >>>>>> System.out.println(a[5]); // throws NullPointerException >>>>>> (zero fields) >>>>>> This better conforms to current idiom, where the initial >>>>>> initialization is with nulls and the println invocation on a null array >>>>>> element or field throws a NPE. >>>>>> >>>>>> As you say, my suggestion means runtime testing to determine if all >>>>>> fields are zero, which has a performance cost. This will only occur if the >>>>>> JVM implements the ! specification, which it presumably will only do if the >>>>>> object is small. And the cost will be small (I am presuming) relative to >>>>>> savings from allowing the memory footprint to match that of primitives. Am >>>>>> I wrong? There is value in conforming to current idiom. >>>>>> >>>>>> Turning to the LooselyConsistentValue, I withdraw my comments. I >>>>>> mistakenly presumed that its use would be required, which is false. It >>>>>> simply enables a single-threaded (or volatile-protected) application to >>>>>> allow additional inlining, which is harmless. >>>>>> >>>>>> John >>>>>> >>>>>> On Thu, Jan 18, 2024 at 4:56?PM - wrote: >>>>>> >>>>>>> Hi John, >>>>>>> >>>>>>> On Thu, Jan 18, 2024 at 2:30?PM John Bossons >>>>>>> wrote: >>>>>>> >>>>>>>> Hi all, >>>>>>>> >>>>>>>> Maybe I am missing something, but the proposal seems to be trying >>>>>>>> to do too much. >>>>>>>> >>>>>>>> Specifically: Why not simply provide that appending ! 
to a type >>>>>>>> specification for an object (field, array element, or parameter) means that >>>>>>>> that the object is not only null-restricted but also never zero and >>>>>>>> necessarily non-atomic unless small? >>>>>>>> >>>>>>> First, a reminder that some objects cannot be non-atomic, mostly >>>>>>> when fields have dependencies/constraints on each other: if you have a >>>>>>> range, you cannot allow its lower bound to be larger than its upper bound. >>>>>>> Non-atomic representations cannot avoid this pitfall. Also you seem >>>>>>> to misunderstand non-atomic: if an object is non-atomic, each of its fields >>>>>>> can update independently from each other, so a 3-d position can be >>>>>>> non-atomic, but not so for a range. Non-atomicity is dangerous, and it >>>>>>> should not be the default. However, if an atomic class is small enough, >>>>>>> like OptionalInt (as now many architecture has like atomic handling of 16 >>>>>>> bytes etc.) JVM may choose to apply non-atomic optimizations to them for >>>>>>> better performance without violating their object constraints. >>>>>>> >>>>>>>> >>>>>>>> Why complicate the specification with an implicit constructor that >>>>>>>> a developer will never explicitly invoke? Why permit a developer to 'opt >>>>>>>> in' to non-atomic? >>>>>>>> >>>>>>> The implicit constructor can always be called; its existence asks >>>>>>> programmers to affirm that the zero-filled inlined instance is a valid >>>>>>> instance. And this instance is different from a null, as null is a pointer, >>>>>>> yet the zero-instance has a different size defined by the class layout in >>>>>>> the stack/heap. >>>>>>> >>>>>>>> >>>>>>>> Sure, that means trying to read a zero value triggers a NPE. That >>>>>>>> just means that a type that can legitimately have a zero value cannot be >>>>>>>> specified as null-restricted, since a zero value (e.g. a {null, null} Name) >>>>>>>> is the equivalent of a null unrestricted value object. Why go beyond that? 
>>>>>>>> If a non-null zero value is possible, the type cannot be null-restricted >>>>>>>> and so can only be an unrestricted JEP 401 value type. End of story. >>>>>>>> >>>>>>> You see the inlined zero instance and the null pointer have >>>>>>> different sizes, and thus they are not exchangeable. Converting the inlined >>>>>>> zero instance to null to throw NPE is complex and hurtful to performance as >>>>>>> you will scan unrelated bits for almost every field access. >>>>>>> >>>>>>> And for unrestricted value type, yes, they exist and can possibly be >>>>>>> inlined as well if the restricted type is small enough (i.e. has space for >>>>>>> extra bit indicating nullity) But reminder, the nullity bit itself isn't >>>>>>> even non-atomic with (depends on) the rest of the object! You don't want >>>>>>> the nullity to indicate null while the rest of the object indicate some >>>>>>> sort of non-null value, which can happen in a non-atomic context. >>>>>>> >>>>>>>> >>>>>>>> With respect to non-atomic, what is new? Yes, unexpected instances >>>>>>>> may occur without synchronization if the object is larger than the word >>>>>>>> size of the implementation. Why do we need to extend a >>>>>>>> LooselyConsistentValue interface to know/permit that? >>>>>>>> >>>>>>> Unexpected instances don't occur without synchronization if you use >>>>>>> finals, such as in Java's String or immutable List.of(). These APIs may >>>>>>> capture any "permitted value" from the arrays passed in, but once >>>>>>> constructed, the captured value remains constant no matter which thread >>>>>>> observes the String/List object reference. 
(Technically, JVM implements >>>>>>> this with a store-store fence between end of field writes in the >>>>>>> constructor and object reference is shared anywhere, and a load-load fence >>>>>>> between object reference read and field read) Value classes is about the >>>>>>> safety of final fields in programming instead of the close encounter of >>>>>>> third kinds of synchronization, volatiles, and fences. >>>>>>> >>>>>>>> >>>>>>>> Can we not keep this 'simple' (if that word has meaning in this >>>>>>>> context)? What am I missing? >>>>>>>> >>>>>>> I think you are missing a bit about how the layout (inlining is >>>>>>> represented in memory) and value classes (the thread safety its final >>>>>>> offers) work, and what "non-atomic" means. Feel free to question more. >>>>>>> >>>>>>>> >>>>>>>> John >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Phone: (416) 450-3584 (cell) >>>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> Phone: (416) 450-3584 (cell) >>>>>> >>>>> >>>> >>>> -- >>>> Phone: (416) 450-3584 (cell) >>>> >>> >> >> -- >> Phone: (416) 450-3584 (cell) >> > -- Phone: (416) 450-3584 (cell) -------------- next part -------------- An HTML attachment was scrubbed... URL: From liangchenblue at gmail.com Fri Jan 19 18:28:12 2024 From: liangchenblue at gmail.com (-) Date: Fri, 19 Jan 2024 12:28:12 -0600 Subject: Null-restricted types: Why so complicated? In-Reply-To: References: Message-ID: Hi John, As Quân said, you should only declare your array with ! if you know the zero instance is safe. Otherwise, don't use the ! null-restriction declaration, which will still fill the array with nulls and throw NPE on an undefined array element. It is still technically possible for Valhalla to inline any value type into an array, null-restricted or not, except that non-atomicity and implicit construction make inlining easier (given the hardware constraints), and correctness should always come before performance in your designs.
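To make the null-filled behavior concrete, here is a sketch runnable on current JDKs. A plain record named Range stands in for the value class under discussion; the ! null-restriction syntax and value classes themselves are not yet available, so this shows only today's reference-array semantics:

```java
public class NullFilledArrayDemo {
    // A plain record as a stand-in for the proposed 'value class' Range.
    record Range(int lo, int hi) {}

    public static void main(String[] args) {
        Range[] a = new Range[100];   // reference array: every slot starts out null
        System.out.println(a[5]);     // printing the null slot itself is fine
        try {
            int lo = a[5].lo();       // reading *through* the undefined slot...
            System.out.println(lo);
        } catch (NullPointerException e) {
            System.out.println("NPE on undefined element"); // ...fails here
        }
    }
}
```

This is exactly the "fill with nulls and throw NPE on an undefined array element" behavior described above, which a zero-instance-safe null-restricted array would trade away for flattening.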
On Fri, Jan 19, 2024 at 12:06?PM John Bossons wrote: > Hi again, > > What you suggest is one solution. My counter: It's unsafe. The JVM should > be able to protect a developer from invoking doSomethingWith(a[5]) on an > undefined array element (or method parameter). > > On Fri, Jan 19, 2024 at 12:43?PM Qu?n Anh Mai wrote: > >> Why not just >> >> Range[] a = new Range[100]; // allocate with null values >> System.out.println(a[5]); // NullPointerException >> >> By using Range![] you accept that a zero value is acceptable, similar to >> how an int[] works. If you do not want the uninitialized values to be >> usable then do not use null-restricted type. >> >> Cheer, >> Quan Anh >> >> On Sat, 20 Jan 2024 at 01:34, John Bossons wrote: >> >>> Hi Quan Anh, >>> >>> We're talking past each other. >>> >>> ME: But in Java idiom that means that a developer can invoke the public >>> implicit constructor, which will cause confusion. >>> YOU: An implicit constructor can be invoked like any other constructor, >>> and it will return an all-zero instance of the corresponding class. >>> >>> >>> Precisely. Which will often not be valid in the application context. It >>> should be possible for a constructor to exclude an all-zero instance, such >>> as a zero-length Range or an all-null Name (to use two examples in the >>> draft spec) without giving up the ability to specify that it is >>> null-restricted. Or for the 'real' constructor to be private, invoked from >>> a factory method, which is effectively made useless as a protective feature >>> if a public constructor is also provided. (If the implicit constructor >>> could be specified as private, that would take care of the problem. >>> Extending a marker interface is simpler.) >>> >>> ME: > My further suggestion is that appending ! 
to a type should mean >>> that the default initialized value of an instance (all fields zero) is >>> equivalent to null, so that >>> > Range![] a = new Range![100]; // allocated with zero values >>> > System.out.println(a[5]); // throws NullPointerException >>> (zero fields) >>> > This better conforms to current idiom, where the initial >>> initialization is with nulls and the println invocation on a null array >>> element or field throws a NPE. >>> YOU: What is the value of this proposal? If you want the all-zero >>> instance to be equivalent to null, just do not have any constructor that >>> initializes an instance to that state. The whole point of null-restricted >>> fields/variables is to indicate that the field/variable is always valid. >>> >>> >>> The issue here is not what the constructor does -- it hasn't been >>> invoked yet for the element on which println is invoked -- but rather that >>> the fact that the array element is undefined should be capable of being >>> caught. >>> >>> On Fri, Jan 19, 2024 at 11:48?AM Qu?n Anh Mai wrote: >>> >>>> I forgot to cc valhalla-dev >>>> >>>> ---------- Forwarded message --------- >>>> From: Qu?n Anh Mai >>>> Date: Sat, 20 Jan 2024 at 00:33 >>>> Subject: Re: Null-restricted types: Why so complicated? >>>> To: John Bossons >>>> >>>> >>>> Hi, >>>> >>>> > But in Java idiom that means that a developer can invoke the public >>>> implicit constructor, which will cause confusion. >>>> >>>> An implicit constructor can be invoked like any other constructor, and >>>> it will return an all-zero instance of the corresponding class. >>>> >>>> > My further suggestion is that appending ! 
to a type should mean that >>>> the default initialized value of an instance (all fields zero) is >>>> equivalent to null, so that >>>> > Range![] a = new Range![100]; // allocated with zero values >>>> > System.out.println(a[5]); // throws NullPointerException >>>> (zero fields) >>>> > This better conforms to current idiom, where the initial >>>> initialization is with nulls and the println invocation on a null array >>>> element or field throws a NPE. >>>> >>>> What is the value of this proposal? If you want the all-zero instance >>>> to be equivalent to null, just do not have any constructor that initializes >>>> an instance to that state. The whole point of null-restricted >>>> fields/variables is to indicate that the field/variable is always valid. >>>> >>>> I think you are having some confusion. Null-restriction is the property >>>> of a variable/field, i.e. the property of the holder, not of the class >>>> itself. The class having implicit constructors simply means that it allows >>>> the existence of null-restricted fields/variables. The class can be used as >>>> normal with non-null-restricted types. (e.g Range r = null;) >>>> >>>> Regards, >>>> Quan Anh >>>> >>>> On Sat, 20 Jan 2024 at 00:09, John Bossons wrote: >>>> >>>>> Thanks for your comments. I was not sufficiently explicit. >>>>> >>>>> Let me focus on implicit. I guess my dislike is of introducing a >>>>> 'fake' constructor into the definition of a class. I say 'fake' because, as >>>>> I understand it, the only purpose of the implicit constructor is to >>>>> indicate to the JVM/compiler that a never-null instance can be created. But >>>>> in Java idiom that means that a developer can invoke the public implicit >>>>> constructor, which will cause confusion. >>>>> >>>>> Maybe it would be better to require a potentially null-restricted >>>>> class to extend a marker interface ('extends NeverNullPossible'? Or maybe, >>>>> looking ahead to my next comment, 'extends AllZerosIsNull'?). 
That would >>>>> enable the compiler to catch an invalid use of the ! marker in a >>>>> declaration, just as the proposed implicit constructor does, while >>>>> conforming better to common Java idiom. >>>>> >>>>> My further suggestion is that appending ! to a type should mean that >>>>> the default initialized value of an instance (all fields zero) is >>>>> equivalent to null, so that >>>>> Range![] a = new Range![100]; // allocated with zero values >>>>> System.out.println(a[5]); // throws NullPointerException >>>>> (zero fields) >>>>> This better conforms to current idiom, where the initial >>>>> initialization is with nulls and the println invocation on a null array >>>>> element or field throws a NPE. >>>>> >>>>> As you say, my suggestion means runtime testing to determine if all >>>>> fields are zero, which has a performance cost. This will only occur if the >>>>> JVM implements the ! specification, which it presumably will only do if the >>>>> object is small. And the cost will be small (I am presuming) relative to >>>>> savings from allowing the memory footprint to match that of primitives. Am >>>>> I wrong? There is value in conforming to current idiom. >>>>> >>>>> Turning to the LooselyConsistentValue, I withdraw my comments. I >>>>> mistakenly presumed that its use would be required, which is false. It >>>>> simply enables a single-threaded (or volatile-protected) application to >>>>> allow additional inlining, which is harmless. >>>>> >>>>> John >>>>> >>>>> On Thu, Jan 18, 2024 at 4:56?PM - wrote: >>>>> >>>>>> Hi John, >>>>>> >>>>>> On Thu, Jan 18, 2024 at 2:30?PM John Bossons >>>>>> wrote: >>>>>> >>>>>>> Hi all, >>>>>>> >>>>>>> Maybe I am missing something, but the proposal seems to be trying to >>>>>>> do too much. >>>>>>> >>>>>>> Specifically: Why not simply provide that appending ! 
to a type >>>>>>> specification for an object (field, array element, or parameter) means that >>>>>>> that the object is not only null-restricted but also never zero and >>>>>>> necessarily non-atomic unless small? >>>>>>> >>>>>> First, a reminder that some objects cannot be non-atomic, mostly when >>>>>> fields have dependencies/constraints on each other: if you have a range, >>>>>> you cannot allow its lower bound to be larger than its upper bound. >>>>>> Non-atomic representations cannot avoid this pitfall. Also you seem >>>>>> to misunderstand non-atomic: if an object is non-atomic, each of its fields >>>>>> can update independently from each other, so a 3-d position can be >>>>>> non-atomic, but not so for a range. Non-atomicity is dangerous, and it >>>>>> should not be the default. However, if an atomic class is small enough, >>>>>> like OptionalInt (as now many architecture has like atomic handling of 16 >>>>>> bytes etc.) JVM may choose to apply non-atomic optimizations to them for >>>>>> better performance without violating their object constraints. >>>>>> >>>>>>> >>>>>>> Why complicate the specification with an implicit constructor that a >>>>>>> developer will never explicitly invoke? Why permit a developer to 'opt in' >>>>>>> to non-atomic? >>>>>>> >>>>>> The implicit constructor can always be called; its existence asks >>>>>> programmers to affirm that the zero-filled inlined instance is a valid >>>>>> instance. And this instance is different from a null, as null is a pointer, >>>>>> yet the zero-instance has a different size defined by the class layout in >>>>>> the stack/heap. >>>>>> >>>>>>> >>>>>>> Sure, that means trying to read a zero value triggers a NPE. That >>>>>>> just means that a type that can legitimately have a zero value cannot be >>>>>>> specified as null-restricted, since a zero value (e.g. a {null, null} Name) >>>>>>> is the equivalent of a null unrestricted value object. Why go beyond that? 
>>>>>>> If a non-null zero value is possible, the type cannot be null-restricted >>>>>>> and so can only be an unrestricted JEP 401 value type. End of story. >>>>>>> >>>>>> You see the inlined zero instance and the null pointer have different >>>>>> sizes, and thus they are not exchangeable. Converting the inlined zero >>>>>> instance to null to throw NPE is complex and hurtful to performance as you >>>>>> will scan unrelated bits for almost every field access. >>>>>> >>>>>> And for unrestricted value type, yes, they exist and can possibly be >>>>>> inlined as well if the restricted type is small enough (i.e. has space for >>>>>> extra bit indicating nullity) But reminder, the nullity bit itself isn't >>>>>> even non-atomic with (depends on) the rest of the object! You don't want >>>>>> the nullity to indicate null while the rest of the object indicate some >>>>>> sort of non-null value, which can happen in a non-atomic context. >>>>>> >>>>>>> >>>>>>> With respect to non-atomic, what is new? Yes, unexpected instances >>>>>>> may occur without synchronization if the object is larger than the word >>>>>>> size of the implementation. Why do we need to extend a >>>>>>> LooselyConsistentValue interface to know/permit that? >>>>>>> >>>>>> Unexpected instances don't occur without synchronization if you use >>>>>> finals, such as in Java's String or immutable List.of(). These APIs may >>>>>> capture any "permitted value" from the arrays passed in, but once >>>>>> constructed, the captured value remains constant no matter which thread >>>>>> observes the String/List object reference. 
(Technically, JVM implements >>>>>> this with a store-store fence between end of field writes in the >>>>>> constructor and object reference is shared anywhere, and a load-load fence >>>>>> between object reference read and field read) Value classes is about the >>>>>> safety of final fields in programming instead of the close encounter of >>>>>> third kinds of synchronization, volatiles, and fences. >>>>>> >>>>>>> >>>>>>> Can we not keep this 'simple' (if that word has meaning in this >>>>>>> context)? What am I missing? >>>>>>> >>>>>> I think you are missing a bit about how the layout (inlining is >>>>>> represented in memory) and value classes (the thread safety its final >>>>>> offers) work, and what "non-atomic" means. Feel free to question more. >>>>>> >>>>>>> >>>>>>> John >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Phone: (416) 450-3584 (cell) >>>>>>> >>>>>> >>>>> >>>>> -- >>>>> Phone: (416) 450-3584 (cell) >>>>> >>>> >>> >>> -- >>> Phone: (416) 450-3584 (cell) >>> >> > > -- > Phone: (416) 450-3584 (cell) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maurizio.cimadamore at oracle.com Fri Jan 19 19:02:29 2024 From: maurizio.cimadamore at oracle.com (Maurizio Cimadamore) Date: Fri, 19 Jan 2024 19:02:29 +0000 Subject: Null-restricted types: Why so complicated? In-Reply-To: References: Message-ID: <42016d8e-edca-49b4-b3ba-86ae1e32e1a3@oracle.com> Hi John On 18/01/2024 19:38, John Bossons wrote: > Why complicate the specification with an implicit constructor that a > developer will never explicitly invoke? Why permit a developer to 'opt > in' to non-atomic? I believe you are coming at this from the wrong angle. The implicit constructor is, at its core, a mechanism for the class to advertise that it has a zero instance (as the JEP calls it [1]). This means that whenever a default value of the class has to be materialized it?s as if the impliict constructor was called. 
These are some of the ways this can happen (there's probably more): * when you allocate arrays * when you create a `MethodHandle` which returns the zero value of a given class (see `MethodHandles::zero`) * if we ever provide a syntax to say `Foo.default`/`Foo.zero` In other words, once a value class has an associated zero-instance, developers can materialize that zero-instance in many different ways. Because of that, the point you raise that implicit constructors make you declare a constructor that you don't want your clients to invoke is a bit off the mark, because clients can materialize zero-instances anyway. Actually, advertising it as an API point makes it quite clear: the maintainer of that value class has to be prepared to handle cases where all fields are zeroed - that's part of its API contract. Zooming back a bit, you seem to favor a model where the cost is pushed to every read of an array element, or flattenable field. Reads are (way) more common than writes, which is why the current proposal only lets you build array types (and flattenable fields) which make sense given the value type under consideration. If you want to build a non-null container, fine, but the type of the container had better support the "all zero" configuration, since that's the paintbrush the JVM will use to paint the container bits. This means that array creation of a null-restricted value is the same as for a regular identity class (all bits zeros) and, since the value class explicitly declared its support for the zero-instance configuration, no check is required on reads. And, zooming back even more (now we're up in the clouds :-) ), the model we're trying to achieve here is one where you get more (flattening) guarantees by progressively giving up other guarantees.
For example: * opt-out of identity (use `value`) -> enables better flattening/scalarization (on the stack) and allows flattening (but with some extra bits to handle nulls) o opt-out of zero-instance protection (use implicit constructors) -> removes the extra null state from the flattened storage o opt-out of nullability (use null-restricted types) -> allows flattening of fields/array elements o opt-out of atomic updates (use non-atomic) -> allows flattening of types that are bigger than N (where N is defined by the JVM) You can see this as a series of steps that takes you, smoothly, from an identity class such as `java.util.ArrayList` down to something simpler and simpler, until you get to `int`, a primitive type (which, not coincidentally, has given up exactly the same guarantees listed above!). That is, maximum flattening (e.g. int-like) is not a simple on/off switch but, rather, a property that emerges when the treatment of the value being flattened can be relaxed along /all/ the axes shown above. This model is described, in much better words than the ones I used here, in Brian's document [2]. Maurizio [1] - https://openjdk.org/jeps/8316779 [2] - https://github.com/openjdk/valhalla-docs/blob/main/site/design-notes/state-of-valhalla/02-object-model.md -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbossons at gmail.com Fri Jan 19 19:21:21 2024 From: jbossons at gmail.com (John Bossons) Date: Fri, 19 Jan 2024 14:21:21 -0500 Subject: Null-restricted types: Why so complicated? In-Reply-To: References: Message-ID: Hi Liang Chen, I totally agree with your earlier comment re more efficient checking. The big win comes from value classes. Null-restricted classes are a lesser but useful additional win. But I remain concerned about catching an undefined element of a Long![] array. Maybe we just have to wait for 128-bit word sizes, so we don't need null restriction for boxed primitives!
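As a point of comparison for the Long![] concern, this is how boxed and primitive array defaults already differ on any current JDK (no Valhalla features involved): the primitive array is default-filled with zeros, while the boxed array is default-filled with nulls that surface as an NPE only on unboxing.

```java
public class BoxedVsPrimitiveDefaults {
    public static void main(String[] args) {
        long[] primitives = new long[3];   // default-filled with 0L
        Long[] boxed = new Long[3];        // default-filled with null references

        System.out.println(primitives[0]); // 0 -- a usable (if arbitrary) value
        System.out.println(boxed[0]);      // null -- undefined, but printable
        try {
            long v = boxed[0];             // auto-unboxing the null element...
            System.out.println(v);
        } catch (NullPointerException e) {
            System.out.println("NPE on unboxing an undefined element");
        }
    }
}
```

A Long![] array under the proposal would behave like the long[] case: zero-filled, with no null state left to catch.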
I think you are convincing me that any array of 'large' null-restricted elements is unsafe. Maybe that's the answer -- Leave null-restricted types as defined, but don't implement arrays of null-restricted elements larger than ints! Or, to put this in more acceptable terms, let the JVM make that decision. John PS: I still maintain that use of a public implicit constructor to indicate null-restriction is not a good choice, because of the constraints it imposes on developers. See Effective Java, Item 1, for the design pattern it renders ineffective. A marker interface (analogous to Serializable or Iterable) would avoid this effect. On Fri, Jan 19, 2024 at 12:52?PM - wrote: > > > On Fri, Jan 19, 2024 at 10:07?AM John Bossons wrote: > >> Thanks for your comments. I was not sufficiently explicit. >> >> Let me focus on implicit. I guess my dislike is of introducing a 'fake' >> constructor into the definition of a class. I say 'fake' because, as I >> understand it, the only purpose of the implicit constructor is to indicate >> to the JVM/compiler that a never-null instance can be created. But in Java >> idiom that means that a developer can invoke the public implicit >> constructor, which will cause confusion. >> > It is a real constructor like that default no-arg one generated by javac, > and developers always CAN invoke that constructor. It is, however, a > watered-down version, because our existing default one can perform side > effects like `private List values = new ArrayList<>();` injected > to the end of constructor, while the implicit one must give it up so that > JVM can construct zero instances cheaply yet correctly. As a result, this > constructor cannot declare any side-effect code or declare a custom > superconstrctor call, so it will look like a "fake" one, yet it is no > different from a true constructor. > >> >> Maybe it would be better to require a potentially null-restricted class >> to extend a marker interface ('extends NeverNullPossible'? 
Or maybe, >> looking ahead to my next comment, 'extends AllZerosIsNull'?). That would >> enable the compiler to catch an invalid use of the ! marker in a >> declaration, just as the proposed implicit constructor does, while >> conforming better to common Java idiom. >> > A zero instance is a class capacity indeed, and such classes must be > final. My only cent against marker interfaces is that I don't think Java > compiler ever emits errors simply because your class implements an > unsuitable interface. > >> >> My further suggestion is that appending ! to a type should mean that the >> default initialized value of an instance (all fields zero) is equivalent to >> null, so that >> Range![] a = new Range![100]; // allocated with zero values >> System.out.println(a[5]); // throws NullPointerException (zero >> fields) >> This better conforms to current idiom, where the initial initialization >> is with nulls and the println invocation on a null array element or field >> throws a NPE. >> > Consider this: how would you differentiate a null Range versus a Range[0, > 0]? Are both all zero bits? > This is where the zero instance starts: before anything, the zero instance > has always been a VALID instance of an object, yet its inlined > representation will be all zero bits, which means it will coincide with > null; thus, we introduce the null-restricted concept to avoid the > performance pitfalls we will suffer to represent a null. > > Also adding on to Anh Mai's comment, recall that Range is a value class (a > prerequisite to null-restriction) so its identity doesn't matter; the VM is > totally permitted inline the null-friendly range array with 9-byte units (8 > byte + single bit indicating nullity), and it is still somewhat a memory > win over linking to regular objects. But we might need some fine-grained > control to ensure VM allocates an inlined array instead of a pointer array > in this case. 
In this case, testing null and throwing NPE would be simply > checking one bit, which is more reliable than scanning a whole byte > interval too. > >> >> As you say, my suggestion means runtime testing to determine if all >> fields are zero, which has a performance cost. This will only occur if the >> JVM implements the ! specification, which it presumably will only do if the >> object is small. And the cost will be small (I am presuming) relative to >> savings from allowing the memory footprint to match that of primitives. Am >> I wrong? There is value in conforming to current idiom. >> >> Turning to the LooselyConsistentValue, I withdraw my comments. I >> mistakenly presumed that its use would be required, which is false. It >> simply enables a single-threaded (or volatile-protected) application to >> allow additional inlining, which is harmless. >> > >> John >> >> On Thu, Jan 18, 2024 at 4:56?PM - wrote: >> >>> Hi John, >>> >>> On Thu, Jan 18, 2024 at 2:30?PM John Bossons wrote: >>> >>>> Hi all, >>>> >>>> Maybe I am missing something, but the proposal seems to be trying to do >>>> too much. >>>> >>>> Specifically: Why not simply provide that appending ! to a type >>>> specification for an object (field, array element, or parameter) means that >>>> that the object is not only null-restricted but also never zero and >>>> necessarily non-atomic unless small? >>>> >>> First, a reminder that some objects cannot be non-atomic, mostly when >>> fields have dependencies/constraints on each other: if you have a range, >>> you cannot allow its lower bound to be larger than its upper bound. >>> Non-atomic representations cannot avoid this pitfall. Also you seem >>> to misunderstand non-atomic: if an object is non-atomic, each of its fields >>> can update independently from each other, so a 3-d position can be >>> non-atomic, but not so for a range. Non-atomicity is dangerous, and it >>> should not be the default. 
However, if an atomic class is small enough, >>> like OptionalInt (as now many architecture has like atomic handling of 16 >>> bytes etc.) JVM may choose to apply non-atomic optimizations to them for >>> better performance without violating their object constraints. >>> >>>> >>>> Why complicate the specification with an implicit constructor that a >>>> developer will never explicitly invoke? Why permit a developer to 'opt in' >>>> to non-atomic? >>>> >>> The implicit constructor can always be called; its existence asks >>> programmers to affirm that the zero-filled inlined instance is a valid >>> instance. And this instance is different from a null, as null is a pointer, >>> yet the zero-instance has a different size defined by the class layout in >>> the stack/heap. >>> >>>> >>>> Sure, that means trying to read a zero value triggers a NPE. That just >>>> means that a type that can legitimately have a zero value cannot be >>>> specified as null-restricted, since a zero value (e.g. a {null, null} Name) >>>> is the equivalent of a null unrestricted value object. Why go beyond that? >>>> If a non-null zero value is possible, the type cannot be null-restricted >>>> and so can only be an unrestricted JEP 401 value type. End of story. >>>> >>> You see the inlined zero instance and the null pointer have different >>> sizes, and thus they are not exchangeable. Converting the inlined zero >>> instance to null to throw NPE is complex and hurtful to performance as you >>> will scan unrelated bits for almost every field access. >>> >>> And for unrestricted value type, yes, they exist and can possibly be >>> inlined as well if the restricted type is small enough (i.e. has space for >>> extra bit indicating nullity) But reminder, the nullity bit itself isn't >>> even non-atomic with (depends on) the rest of the object! You don't want >>> the nullity to indicate null while the rest of the object indicate some >>> sort of non-null value, which can happen in a non-atomic context. 
>>> >>>> >>>> With respect to non-atomic, what is new? Yes, unexpected instances may >>>> occur without synchronization if the object is larger than the word size of >>>> the implementation. Why do we need to extend a LooselyConsistentValue >>>> interface to know/permit that? >>>> >>> Unexpected instances don't occur without synchronization if you use >>> finals, such as in Java's String or immutable List.of(). These APIs may >>> capture any "permitted value" from the arrays passed in, but once >>> constructed, the captured value remains constant no matter which thread >>> observes the String/List object reference. (Technically, JVM implements >>> this with a store-store fence between end of field writes in the >>> constructor and object reference is shared anywhere, and a load-load fence >>> between object reference read and field read) Value classes is about the >>> safety of final fields in programming instead of the close encounter of >>> third kinds of synchronization, volatiles, and fences. >>> >>>> >>>> Can we not keep this 'simple' (if that word has meaning in this >>>> context)? What am I missing? >>>> >>> I think you are missing a bit about how the layout (inlining is >>> represented in memory) and value classes (the thread safety its final >>> offers) work, and what "non-atomic" means. Feel free to question more. >>> >>>> >>>> John >>>> >>>> >>>> -- >>>> Phone: (416) 450-3584 (cell) >>>> >>> >> >> -- >> Phone: (416) 450-3584 (cell) >> > -- Phone: (416) 450-3584 (cell) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbossons at gmail.com Fri Jan 19 19:34:06 2024 From: jbossons at gmail.com (John Bossons) Date: Fri, 19 Jan 2024 14:34:06 -0500 Subject: Null-restricted types: Why so complicated? In-Reply-To: <42016d8e-edca-49b4-b3ba-86ae1e32e1a3@oracle.com> References: <42016d8e-edca-49b4-b3ba-86ae1e32e1a3@oracle.com> Message-ID: Hi Maurizio, Thanks for your comment and the reference to Brian's 2021 document. 
You're right about the series of steps, I get that. My comments on undefined array elements really come down to saying that it's bad practice to specify arrays of null-restricted elements except where you're sure undefined elements are never read before being defined. Sort of like saying it's bad practice to implement LooselyConsistentValue without protecting against race conditions. As to the use of implicit to advertise that a class has a zero instance, see my last note -- I think there's a better (less disruptive) way of doing that, namely by implementing a marker interface. On Fri, Jan 19, 2024 at 2:02?PM Maurizio Cimadamore < maurizio.cimadamore at oracle.com> wrote: > Hi John > > On 18/01/2024 19:38, John Bossons wrote: > > Why complicate the specification with an implicit constructor that a > developer will never explicitly invoke? Why permit a developer to 'opt in' > to non-atomic? > > I believe you are coming at this from the wrong angle. The implicit > constructor is, at its core, a mechanism for the class to advertise that it > has a zero instance (as the JEP calls it [1]). > > This means that whenever a default value of the class has to be > materialized it?s as if the impliict constructor was called. > > This are some of the ways this can happen (there?s probably more): > > - when you allocate arrays > - when you create a MethodHandle which returns the zero value of a > given class (see MethodHandles::zero) > - if we ever provide a syntax to say Foo.default/Foo.zero > > In other words, once a value class has an associated zero-instance, > developers can materialize that zero-instance in many different ways. > Because of that, the point you raise that implicit constructors make you > declare a constructor that you don?t want your clients to invoke is a bit > off the mark, because clients can maetrialize zero-instances anyway. 
> Actually, advertising it as an API point makes it quite clear: the > maintainer of that value class has to be prepared to handle cases where all > fields are zeroed - that?s part of its API contract. > > Zooming back a bit, you seem to favor a model where the cost is pushed to > every read of an array element, or flattenable field. Reads are (way) more > common that writes, which is why the current proposal only lets you build > array types (and flattenable fields) which make sense given the value type > under consideration. If you want to build a non-null container, fine, but > the type of the container must better support the ?all zero? configuration, > since that?s the paintbrush the JVM will use to paint the container bits. > This means that, array creation of a null-restricted value is the same as > for a regular identity class (all bits zeros) and, since the value class > explicitly declared its support for the zero-instance configuration, no > check is required on reads. > > And, zooming back even more (now we?re up in the clouds :-) ), the model > we?re trying to achieve here is one where you get more (flattening) > guarantees, by progressively giving up other guarantees. 
For example: > > - > > opt-out of identity (use value) -> enables better > flattening/scalarization (on the stack), allow flattening (but with some > extra bits to handle nulls) > - opt-out of zero-instance protection (use implicit constructors) -> > removes the extra null state from the flattened storage > - opt-out of nullability (use null-restricted types) -> allows > flattening of fields/array elements > - opt-out of atomic updates (use non-atomic) -> allows flattening > of types that are bigger than N (where N defined by the JVM) > > You can see this as a series of step that takes you, smoothly, from an > identity class such as java.util.ArrayList, down to something simpler > and simpler, until you get to int, a primitive type (which, not > coincidentally, has given up exactly the same guarantees listed above!). > That is, maximum flattening (e.g. int-like) is not a simple on/off switch, > but, rather, a property that emerges when the treatment of the value being > flattened can be relaxed along *all* the axis shown above. This model > is described, in much better words than the ones I used here, in Brian?s > document [2]. > > Maurizio > > [1] - https://openjdk.org/jeps/8316779 > [2] - > https://github.com/openjdk/valhalla-docs/blob/main/site/design-notes/state-of-valhalla/02-object-model.md > ? > -- Phone: (416) 450-3584 (cell) -------------- next part -------------- An HTML attachment was scrubbed... URL: From liangchenblue at gmail.com Fri Jan 19 20:27:18 2024 From: liangchenblue at gmail.com (-) Date: Fri, 19 Jan 2024 14:27:18 -0600 Subject: Null-restricted types: Why so complicated? In-Reply-To: References: Message-ID: Hi John, For the implicit constructor, it really means to indicate that the zero default is available. The ability to null-restrict such a field/array element is implied from this zero default, because now such a default can replace the null when you have a few zero-filled byte representation of an object. 
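For illustration, the JDK already materializes per-type default ("zero") values today through MethodHandles.zero, one of the API points Maurizio listed. This is a sketch of current behavior only, not Valhalla semantics: for a value class with an implicit constructor, the same mechanism would return its zero instance.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;

public class ZeroDefaultDemo {
    public static void main(String[] args) throws Throwable {
        // MethodHandles.zero(type) yields a handle that returns the default
        // value of that type: 0 for primitives, null for reference types.
        MethodHandle zeroInt = MethodHandles.zero(int.class);
        MethodHandle zeroRef = MethodHandles.zero(String.class);

        System.out.println((int) zeroInt.invokeExact());    // 0
        System.out.println((String) zeroRef.invokeExact()); // null
    }
}
```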
You shouldn't declare such a constructor solely for the ability to null-restrict. As far as I can see from your previous comments, you love nulls as a default to ensure safety, and the Valhalla designers do too! The point is that while null-restriction is alluring for achieving maximum optimization, you shouldn't always go for it. An example from Valhalla is that LocalDateTime, despite being value-based and a candidate for value class migration, should not have a zero instance, as a zero at 1970-1-1 is a valid value and is not suitable to indicate an invalid date-time the way null is. So if we don't want a null Range to be interpreted as [0, 0], we won't declare this Range to have an implicit constructor. A positive example for an implicit constructor would be post-migration Optional: the default instance will contain a null pointer, equivalent to the current Optional.empty(), and it's a good default value in place of null. So we might say, without null restriction, don't we lose many chances for optimization? Luckily, modern hardware allows our N (from Maurizio's flattening upper limit) to be quite large, and sometimes there are free bits in the layout (like from boolean fields) allowing us to represent a null without extra space. For your question about a Long![] array, it will be almost the same as a long[] array. Maybe the only difference is that Long! won't allow word tearing specification-wise while long does, a relic from the birth of the Java programming language. Per Valhalla expert group members, the last platform that really needed to tear long was some 2013 robot platform, so I think asking for more de-jure safety in Long! compared to long is not a bad thing. If you need nulls, you still have to resort to a Long[] array, or an OptionalLong![] array. On Fri, Jan 19, 2024 at 1:21 PM John Bossons wrote: > Hi Liang Chen, > > I totally agree with your earlier comment re more efficient checking. The > big win comes from value classes.
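The LocalDateTime-versus-Optional contrast above can be checked against today's API (a small runnable sketch; the class name is illustrative):

```java
import java.time.LocalDate;
import java.util.Optional;

public class DefaultContrastDemo {
    public static void main(String[] args) {
        // For date types, the "all-zero" candidate (epoch day 0) is a real
        // date, 1970-01-01, so it cannot double as an "absent" sentinel
        // the way null does.
        LocalDate epoch = LocalDate.ofEpochDay(0);
        System.out.println(epoch); // 1970-01-01

        // Optional's natural default state (a null payload) is exactly
        // Optional.empty(), a good stand-in for "no value".
        Optional<String> def = Optional.empty();
        System.out.println(def.isPresent()); // false
    }
}
```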
Null-restricted classes are a lesser but > useful additional win. But I remain concerned about catching an undefined > element of a Long![] array. Maybe we just have to wait for 128-bit word > sizes, so we don't need null restriction for boxed primitives! > > I think you are convincing me that any array of 'large' null-restricted > elements is unsafe. Maybe that's the answer -- leave null-restricted types > as defined, but don't implement arrays of null-restricted elements larger > than ints! Or, to put this in more acceptable terms, let the JVM make > that decision. > > John > > PS: I still maintain that use of a public implicit constructor to > indicate null-restriction is not a good choice, because of the constraints > it imposes on developers. See Effective Java, Item 1, for the design > pattern it renders ineffective. A marker interface (analogous to > Serializable or Iterable) would avoid this effect. > > On Fri, Jan 19, 2024 at 12:52 PM - wrote: > >> >> >> On Fri, Jan 19, 2024 at 10:07 AM John Bossons wrote: >> >>> Thanks for your comments. I was not sufficiently explicit. >>> >>> Let me focus on implicit. I guess my dislike is of introducing a 'fake' >>> constructor into the definition of a class. I say 'fake' because, as I >>> understand it, the only purpose of the implicit constructor is to indicate >>> to the JVM/compiler that a never-null instance can be created. But in Java >>> idiom that means that a developer can invoke the public implicit >>> constructor, which will cause confusion. >>> >> It is a real constructor like the default no-arg one generated by javac, >> and developers always CAN invoke that constructor. It is, however, a >> watered-down version, because our existing default one can perform side >> effects like `private List values = new ArrayList<>();` injected >> at the end of the constructor, while the implicit one must give them up so that >> the JVM can construct zero instances cheaply yet correctly.
As a result, this >> constructor cannot declare any side-effect code or declare a custom >> superconstructor call, so it will look like a "fake" one, yet it is no >> different from a true constructor. >> >>> >>> Maybe it would be better to require a potentially null-restricted class >>> to extend a marker interface ('extends NeverNullPossible'? Or maybe, >>> looking ahead to my next comment, 'extends AllZerosIsNull'?). That would >>> enable the compiler to catch an invalid use of the ! marker in a >>> declaration, just as the proposed implicit constructor does, while >>> conforming better to common Java idiom. >>> >> A zero instance is a class capability indeed, and such classes must be >> final. My only cent against marker interfaces is that I don't think the Java >> compiler ever emits errors simply because your class implements an >> unsuitable interface. >> >>> >>> My further suggestion is that appending ! to a type should mean that the >>> default initialized value of an instance (all fields zero) is equivalent to >>> null, so that >>> Range![] a = new Range![100]; // allocated with zero values >>> System.out.println(a[5]); // throws NullPointerException >>> (zero fields) >>> This better conforms to current idiom, where the initial initialization >>> is with nulls and the println invocation on a null array element or field >>> throws an NPE. >>> >> Consider this: how would you differentiate a null Range versus a Range[0, >> 0]? Are both all zero bits? >> This is where the zero instance starts: before anything, the zero >> instance has always been a VALID instance of an object, yet its inlined >> representation will be all zero bits, which means it will coincide with >> null; thus, we introduce the null-restricted concept to avoid the >> performance pitfalls we would suffer to represent a null.
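The "null Range versus Range[0, 0]" ambiguity can be simulated with a hand-rolled flattened array. This is only a sketch of the bit-level problem, not the VM's actual layout, and the names are made up:

```java
// Simulate flattening Range(lo, hi) into a long[] backing store:
// each slot packs two ints. Zero-filled slots are bit-for-bit
// indistinguishable from a legitimately stored Range(0, 0).
public class FlattenedRangeDemo {
    static long pack(int lo, int hi) {
        return ((long) lo << 32) | (hi & 0xFFFF_FFFFL);
    }

    public static void main(String[] args) {
        long[] flat = new long[4];  // JVM-style zero fill on allocation
        flat[1] = pack(0, 0);       // an explicitly stored Range(0, 0)

        // flat[0] was never written, flat[1] holds a real value, yet
        // their bits are identical -- so "all zero means null" cannot
        // work unless the class declares the zero instance valid.
        System.out.println(flat[0] == flat[1]); // true
    }
}
```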
>> Also adding on to Anh Mai's comment, recall that Range is a value class >> (a prerequisite to null-restriction) so its identity doesn't matter; the VM >> is totally permitted to inline the null-friendly range array with 9-byte units >> (8 bytes + a single bit indicating nullity), and it is still somewhat a memory >> win over linking to regular objects. But we might need some fine-grained >> control to ensure the VM allocates an inlined array instead of a pointer array >> in this case. In this case, testing for null and throwing an NPE would simply be >> checking one bit, which is more reliable than scanning a whole byte >> interval too. >> >>> >>> As you say, my suggestion means runtime testing to determine if all >>> fields are zero, which has a performance cost. This will only occur if the >>> JVM implements the ! specification, which it presumably will only do if the >>> object is small. And the cost will be small (I am presuming) relative to >>> savings from allowing the memory footprint to match that of primitives. Am >>> I wrong? There is value in conforming to current idiom. >>> >>> Turning to LooselyConsistentValue, I withdraw my comments. I >>> mistakenly presumed that its use would be required, which is false. It >>> simply enables a single-threaded (or volatile-protected) application to >>> allow additional inlining, which is harmless. >>> >> >>> John >>> >>> On Thu, Jan 18, 2024 at 4:56 PM - wrote: >>>> >>>> Hi John, >>>> >>>> On Thu, Jan 18, 2024 at 2:30 PM John Bossons >>>>> wrote: >>>>> >>>>> Hi all, >>>>> >>>>> Maybe I am missing something, but the proposal seems to be trying to >>>>> do too much. >>>>> >>>>> Specifically: Why not simply provide that appending ! to a type >>>>> specification for an object (field, array element, or parameter) means that >>>>> the object is not only null-restricted but also never zero and >>>>> necessarily non-atomic unless small?
>>>>> >>>> First, a reminder that some objects cannot be non-atomic, mostly when >>>> fields have dependencies/constraints on each other: if you have a range, >>>> you cannot allow its lower bound to be larger than its upper bound. >>>> Non-atomic representations cannot avoid this pitfall. Also, you seem >>>> to misunderstand non-atomic: if an object is non-atomic, each of its fields >>>> can update independently of the others, so a 3-d position can be >>>> non-atomic, but not so a range. Non-atomicity is dangerous, and it >>>> should not be the default. However, if an atomic class is small enough, >>>> like OptionalInt (as many architectures now have atomic handling of 16 >>>> bytes, etc.), the JVM may choose to apply non-atomic optimizations to it for >>>> better performance without violating its object constraints. >>>> >>>>> >>>>> Why complicate the specification with an implicit constructor that a >>>>> developer will never explicitly invoke? Why permit a developer to 'opt in' >>>>> to non-atomic? >>>>> >>>> The implicit constructor can always be called; its existence asks >>>> programmers to affirm that the zero-filled inlined instance is a valid >>>> instance. And this instance is different from a null, as null is a pointer, >>>> yet the zero instance has a different size defined by the class layout in >>>> the stack/heap. >>>> >>>>> >>>>> Sure, that means trying to read a zero value triggers an NPE. That just >>>>> means that a type that can legitimately have a zero value cannot be >>>>> specified as null-restricted, since a zero value (e.g. a {null, null} Name) >>>>> is the equivalent of a null unrestricted value object. Why go beyond that? >>>>> If a non-null zero value is possible, the type cannot be null-restricted >>>>> and so can only be an unrestricted JEP 401 value type. End of story. >>>>> >>>> You see, the inlined zero instance and the null pointer have different >>>> sizes, and thus they are not exchangeable.
Converting the inlined zero >>>> instance to null to throw an NPE is complex and hurtful to performance, as you >>>> would scan unrelated bits for almost every field access. >>>> >>>> And as for unrestricted value types, yes, they exist and can possibly be >>>> inlined as well if the restricted type is small enough (i.e. has space for >>>> an extra bit indicating nullity). But remember, the nullity bit itself isn't >>>> even non-atomic with respect to (it depends on) the rest of the object! You don't want >>>> the nullity to indicate null while the rest of the object indicates some >>>> sort of non-null value, which can happen in a non-atomic context. >>>> >>>>> >>>>> With respect to non-atomic, what is new? Yes, unexpected instances >>>>> may occur without synchronization if the object is larger than the word >>>>> size of the implementation. Why do we need to extend a >>>>> LooselyConsistentValue interface to know/permit that? >>>>> >>>> Unexpected instances don't occur without synchronization if you use >>>> finals, such as in Java's String or immutable List.of(). These APIs may >>>> capture any "permitted value" from the arrays passed in, but once >>>> constructed, the captured value remains constant no matter which thread >>>> observes the String/List object reference. (Technically, the JVM implements >>>> this with a store-store fence between the end of field writes in the >>>> constructor and the point where the object reference is shared, and a load-load fence >>>> between the object reference read and the field reads.) Value classes are about the >>>> safety of final fields in everyday programming, rather than close encounters of the >>>> third kind with synchronization, volatiles, and fences. >>>>> >>>>> Can we not keep this 'simple' (if that word has meaning in this >>>>> context)? What am I missing? >>>>> >>>> I think you are missing a bit about how layout (how inlining is >>>> represented in memory) and value classes (the thread safety their finals >>>> offer) work, and what "non-atomic" means.
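The point about List.of() capturing values at construction, after which the captured state stays constant, can be demonstrated directly (a small sketch; the class name is illustrative):

```java
import java.util.List;

public class SafeCaptureDemo {
    public static void main(String[] args) {
        String[] source = {"a", "b"};
        // List.of copies ("captures") the array's current elements; the
        // list's state is then fixed behind final fields.
        List<String> captured = List.of(source);

        source[0] = "mutated"; // later writes to the array are not seen
        System.out.println(captured.get(0)); // a
    }
}
```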
Feel free to question more. >>>> >>>>> >>>>> John >>>>> >>>>> >>>>> -- >>>>> Phone: (416) 450-3584 (cell) >>>>> >>>> >>> >>> -- >>> Phone: (416) 450-3584 (cell) >>> >> > > -- > Phone: (416) 450-3584 (cell) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Fri Jan 19 21:34:50 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Fri, 19 Jan 2024 16:34:50 -0500 Subject: Null-restricted types: Why so complicated? In-Reply-To: References: <42016d8e-edca-49b4-b3ba-86ae1e32e1a3@oracle.com> Message-ID: > Thanks for your comment and the reference to Brian's 2021 document. > You're right about the series of steps, I get that. My comments on > undefined array elements really come down to saying that it's bad > practice to specify arrays of null-restricted elements except where > you're sure undefined elements are never read before being defined. I think part of why you are confused here is that you are looking at the features individually, rather than the big picture. Identity restriction is both semantically useful, and potentially admits certain optimizations. Implicit construction is semantically useful (it says "you can use me without initialization", just as types like `int` do), and potentially admits certain optimizations. Non-nullity is semantically useful, and potentially admits certain optimizations. We do not want to define "performance features" here; we want to define semantically useful features of classes, for which we can reliably perform desired optimizations when the right semantic constraints are in place. The hard part was identifying which semantic features those were. Fields of type `String!` (and arrays whose elements are `String!`) are semantically useful (lots of people would love to have them even if they never get value types), even though we can't flatten them at all.
But to ensure their usefulness, we need to ensure they are initialized before use (or, tolerate NPEs in badly written programs, such as when `this` escapes construction.) On the other hand, fields of type `Integer!` (or arrays whose elements are `Integer!`) can be flattened safely regardless of races, this-escapes, etc. -- because `Integer` is implicitly constructible. (And, this also allows us to guarantee that it will *never* be null, as opposed to the `String!` case, where such a guarantee always comes with caveats.) But the cost of this is that we have to know which classes are like String and which are like Integer, and the implicit constructor is what signals this. On the other other hand, fields/arrays of `LocalDate!` _could_ be flattened, but only if we are absolutely 100% sure (no caveats) that they are guaranteed to be initialized before writing. (Failure to get this right is much worse than an errant NPE; we've violated the integrity of the object model.) We want the language and runtime to have access to all the information it needs to provide good performance and safety in all of these cases. You might dislike the syntax choice behind implicit constructors (maybe you prefer a class modifier like `implicitly-constructible`), but this isn't the place for syntax debates. The important thing here is that whether a class can be implicitly constructed or not appears, after many, many hours of analysis, to be an incompressible element of the design, unless we are willing to compromise on either safety or flattening. It is natural and healthy to wonder "can this be simplified further" ... but best to do so with awareness that we may be staring at Chesterton's Fence (https://fs.blog/chestertons-fence/). From jbossons at gmail.com Fri Jan 19 21:54:11 2024 From: jbossons at gmail.com (John Bossons) Date: Fri, 19 Jan 2024 16:54:11 -0500 Subject: Null-restricted types: Why so complicated?
In-Reply-To: References: <42016d8e-edca-49b4-b3ba-86ae1e32e1a3@oracle.com> Message-ID: Fair enough, and the Chesterton analogy is a good summary. This exchange has been useful (at least for me) in clarifying the issues. On Fri, Jan 19, 2024 at 4:34 PM Brian Goetz wrote: > > > Thanks for your comment and the reference to Brian's 2021 document. > > You're right about the series of steps, I get that. My comments on > > undefined array elements really come down to saying that it's bad > > practice to specify arrays of null-restricted elements except where > > you're sure undefined elements are never read before being defined. > > I think part of why you are confused here is that you are looking at the > features individually, rather than the big picture. > > Identity restriction is both semantically useful, and potentially admits > certain optimizations. > Implicit construction is semantically useful (it says "you can use me > without initialization", just as types like `int` do), and potentially > admits certain optimizations. > Non-nullity is semantically useful, and potentially admits certain > optimizations. > > We do not want to define "performance features" here; we want to define > semantically useful features of classes, for which we can reliably > perform desired optimizations when the right semantic constraints are in > place. The hard part was identifying which semantic features those were. > > Fields of type `String!` (and arrays whose elements are `String!`) are > semantically useful (lots of people would love to have them even if they > never get value types), even though we can't flatten them at all. But > to ensure their usefulness, we need to ensure they are initialized > before use (or, tolerate NPEs in badly written programs, such as when > `this` escapes construction.)
> > On the other hand, fields of type `Integer!` (or arrays whose elements > are `Integer!`) can be flattened safely regardless of races, > this-escapes, etc. -- because `Integer` is implicitly constructible. > (And, this also allows us to guarantee that it will *never* be null, as > opposed to the `String!` case, where such a guarantee always comes with > caveats.) But the cost of this is that we have to know which classes > are like String and which are like Integer, and the implicit constructor > is what signals this. > > On the other other hand, fields/arrays of `LocalDate!` _could_ be > flattened, but only if we are absolutely 100% sure (no caveats) that > they are guaranteed to be initialized before writing. (Failure to get > this right is much worse than an errant NPE; we've violated the > integrity of the object model.) > > We want the language and runtime to have access to all the information > it needs to provide good performance and safety in all of these cases. > You might dislike the syntax choice behind implicit constructors (maybe > you prefer a class modifier like `implicitly-constructible`), but this > isn't the place for syntax debates. The important thing here is that > whether a class can be implicitly constructed or not appears, after many > many hours of analysis, to be an incompressible element of the design, > unless we are willing to compromise on either safety or flattening. It > is natural and healthy to wonder "can this be simplified further" ... > but best to do so with awareness that we may be staring at Chesterton's > Fence (https://fs.blog/chestertons-fence/). > -- Phone: (416) 450-3584 (cell) -------------- next part -------------- An HTML attachment was scrubbed... URL: From maurizio.cimadamore at oracle.com Fri Jan 19 21:58:58 2024 From: maurizio.cimadamore at oracle.com (Maurizio Cimadamore) Date: Fri, 19 Jan 2024 21:58:58 +0000 Subject: Null-restricted types: Why so complicated?
In-Reply-To: References: <42016d8e-edca-49b4-b3ba-86ae1e32e1a3@oracle.com> Message-ID: <6fe999e7-ed5e-4ea8-8ef6-e5b6dc7e9a79@oracle.com> On 19/01/2024 19:34, John Bossons wrote: > it's bad practice to specify arrays of null-restricted elements except > where you're sure undefined elements are never read before being defined Counter-argument: it is relatively common to create e.g. an array of ints, which is then only populated sparsely. When you create an int[], you can read its elements (and get a zero out); you don't get any exception. Thanks to having defined values "the right way", that's exactly what you get when you read a freshly minted V![] (Subtle bonus point: because of this, `int` is effectively an alias for `Integer!`). We could have designed this differently, but then we would have ended up in a place which was neither classes nor primitives. Maurizio From wilsons2401 at outlook.com Sat Jan 20 07:00:31 2024 From: wilsons2401 at outlook.com (Smith Wilson) Date: Sat, 20 Jan 2024 07:00:31 +0000 Subject: Value types design decisions Message-ID: Hello Valhalla community, I have reviewed the latest documents on the project and have some concerns about current design decisions. 1) Implicit constructor cannot have body. While I understand why we need the implicit ctor and why it must have no args, I still don't understand why we can't allow users to set default values on their own using that ctor, instead of implicitly defaulting them to (0, false, null). It could solve problems with classes like LocalDate, which requires a more special zero instance (1970-1-1) than a value with pure defaults. I believe that even the String class could somehow be adapted to be a value class in the future, having a default null-restricted value of "" with (byte[] value) assigned to an empty byte array. Non-final fields (hash, hashIsZero) could be placed into a separate final field (i.e. hashHolder) given that it will be forcibly flattened, so there will be no overhead.
2) Implicit constructor must be public. We all learned that overextending public APIs can lead to problems. A great example of this is the wrapper classes, whose constructors were deprecated a long time ago and are still causing a lot of problems. So, newer classes (e.g. Optional) were designed to have static factories (Optional#of, Optional#empty), rather than exposing their internal constructors. (Moreover, constructor calls use different byte-code instructions than method calls, which can also cause byte-code incompatibilities in case of future migrations.) I don't understand why we are going to make the same mistake again and why we can't allow implicit constructors to have any kind of visibility modifier. The VM would then be able to freely use zero instances where necessary, while the user would be able to control use of the class by exposing special APIs. 3) Using an interface (LooselyConsistentValue) to express non-atomicness. Same story as with the Serializable interface. It is considered that using a marker interface for such a problem was bad design. Although this was justified by the fact that we did not have annotations, it is now unclear what makes us use interfaces again. While it is possible to come up with real-life use cases of Serializable where type restrictions may be required (some usage of the ObjectI/OStream APIs), for such VM-level features as non-atomicness there is no real need for such an opportunity. (Moreover, we already have some inconveniences because of that. In some cases, type inference of "var" is already blowing up from large type unions like Number & Comparable & Serializable.) So, I believe we should use alternatives like a class modifier or an annotation, rather than polluting the type system for no reason. Regards, Wilson. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brian.goetz at oracle.com Sat Jan 20 15:24:33 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Sat, 20 Jan 2024 10:24:33 -0500 Subject: Value types design decisions In-Reply-To: References: Message-ID: <0cbe6955-44f3-44f1-a05e-0ffc9b6d2013@oracle.com> (1) and (2) are really the same question, and reflect a misunderstanding of what an implicit constructor is. I think what you are hoping for is that you would be able to select the _default value_ used to initialize freshly allocated memory, rather than the VM-selected all-zero. This is not what an implicit constructor means, because, unfortunately, this is not consistent with the physics of the problem; the VM initializes memory with a wide roller, not a fine brush. While I can certainly sympathize with the _desire_ to have this, it's not really practical. So what *is* an implicit constructor? First, let's understand what happens when you say `new Foo(3)`. An uninitialized object is allocated and zeroed out, and then the Foo(int) constructor is run to initialize its state. The runtime takes some care (though badly written constructors can undermine this) to ensure that the object is not exposed to "regular code" before initialization has completed. An implicit constructor is a statement by the class author that "calling the constructor is optional"; in other words, that the uninitialized object allocated by the JVM is a perfectly valid instance, and a good default value. (Many important classes fit this description, such as nearly all numerics, and some library classes like Duration and Optional.) In turn, this means it is safe for the VM to expose an otherwise-uninitialized object, which, in turn, enables richer flattening optimizations. Since it is a statement that "construction is optional", having a body would make no sense, since the constructor might never be called.
And since construction is optional, the "make me one without construction" is an inherently public operation (to the extent that the class itself is public). You can't restrict it without restricting things like "no one can declare a field of type Foo". We went back and forth between representing this as a special constructor, and representing this as a modifier on the class (e.g., `implicitly-constructible`.) Each had pros and cons (but this isn't the time or place to discuss syntax choices). I suspect that you (though not everyone) would have found the "class modifier" route to be more evocative of what is really going on, though had we gone the other way, there would be different complaints. Of course, you _can_ omit the implicit constructor, and write an ordinary no-arg constructor that lets you set your preferred "default" values. But the cost of that might be not getting all the memory flattening you might hope for. Valhalla, unlike some other projects, is more constrained by the physics of object layout and allocation, and as such, exposes some difficult tradeoffs. (Previously, the VM made these tradeoffs for you, often conservatively.) To your third question, yes, this syntax is controversial. (The alternatives are not necessarily better, and everything you propose has already been discussed. As always, we may reconsider this after gathering more experience.) (As an aside, I would suggest that when confronted with something that seems confusing or surprising, to strive to fully understand before reaching for language like "make the same mistake again.") Cheers, -Brian On 1/20/2024 2:00 AM, Smith Wilson wrote: > Hello Valhalla community, > I have reviewed the latest documents on the project and have some > concerns about current design decisions. > > 1) Implicit constructor cannot have body.
> While I understand why we need implicit ctor and why it must have no > args, I still don't understand why we can't allow users to set default > values on their own using that ctor instead of implicitly defaulting > them to (0, false, null). It could solve problems with such classes > like LocalDate which require more special zeroInstance (1970-1-1) than > value with pure defaults. > I believe that even String class could be somehow adopted to be value > class in future having default null-restricted value of "" with > (byte[] value) assigned to empty byte array. Non-final fields (hash, > hashIsZero) could be placed into separate final field (i.e. > hashHolder) given that it will be forcedly flattened, so there will be > no overhead. > > 2) Implicit constructor must be public. > We all learned that overextending public apis can lead to problems. > Great example of this is wrapper classes whose constructors have been > deprecated long time ago and still causing a lot of problems. So, new > classes (i.e. Optional) were designed to have static factories > (Optional#of, Optional#empty), rather than exposing their internal > constructors. (Moreover, constructor calls use different byte-code > instructions than method calls, which also can cause byte-code > incompatibilities in case of future migrations.) > I don't understand why we are going to make same mistake again and why > we can't allow implicit constructors to have any kind of visibility > modifier. So, VM will be able to freely use zeroInstances where > necessary, while user himself will be able to control use of class > exposing special apis. > > 3) Using interface (LooselyConsistentValue) to express non-atomicness. > Same story as with Serializable interface. It is considered that using > marker-interface for such a problem was bad design. Although this was > justified by the fact that we did not have annotations, it is now > unclear what makes us to use interfaces again. 
While it is possible to > come up with real-life use cases of Serializable where type > restrictions may be required (some usage of ObjectI/OStream apis), for > such VM-close features like non-atomicness there is no real need for > such opportunity. (Moreover, we already have some inconveniences > because of that. In some cases, type inference of "var" is already > blowing up from large type unions like Number & Comparable & > Serializable.) > So, I believe we should use alternatives like class modifier or > annotation, rather than polluting the type system for no reason. > > Regards, > Wilson. -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at noctarius.com Sat Jan 20 15:38:39 2024 From: me at noctarius.com (Christoph Engelbert) Date: Sat, 20 Jan 2024 16:38:39 +0100 Subject: Value types design decisions In-Reply-To: <0cbe6955-44f3-44f1-a05e-0ffc9b6d2013@oracle.com> References: <0cbe6955-44f3-44f1-a05e-0ffc9b6d2013@oracle.com> Message-ID: <83D4F646-B13A-4D0D-8D48-5CC04D60D6A4@noctarius.com> Hey Wilson, As an addition to what Brian said, it would be totally possible to write your Date class in a way that an all-zero'd value would be interpreted as 1970-01-01. In this case the internal representation would not represent the actual items of the date, but the offset to some specific epoch (which is what most date implementations do anyways). In your case an all-zero offset means no offset to the epoch which starts at midnight Jan 1st, 1970. I think that is something to keep in mind. If my default can be considered to be some kind of offset, my zero'd value can always represent that default. Cheers, Chris > On Jan 20, 2024, at 16:24, Brian Goetz wrote: > > (1) and (2) are really the same question, and reflect a misunderstanding of what an implicit constructor is. I think what you are hoping for is that you would be able to select the _default value_ used to initialize freshly allocated memory, rather than the VM-selected all-zero.
This is not what an implicit constructor means, because, unfortunately, this is not consistent with the physics of the problem; the VM initializes memory with a wide roller, not a fine brush. While I can certainly sympathize with the _desire_ to have this, it's not really practical. > > So what *is* an implicit constructor? First, let's understand what happens when you say `new Foo(3)`. An uninitialized object is allocated and zeroed out, and then the Foo(int) constructor is run to initialize its state. The runtime takes some care (though badly written constructors can undermine this) to ensure that the object is not exposed to "regular code" before initialization has completed. > > An implicit constructor is a statement by the class author that "calling the constructor is optional"; in other words, that the uninintialized object allocated by the JVM is a perfectly valid instance, and a good default value. (Many important classes fit this description, such as nearly all numerics, and some library classes like Duration and Optional.) In turn, this means it is safe for the VM to expose an otherwise-uninitialized object, which in turn, enables richer flattening optimizations. Since it is a statement that "construction is optional", having a body would make no sense, since the constructor might never be called. And since construction is optional, the "make me one without construction" is an inherently public operation (to the extent that the class itself is public). You can't restrict it without restricting things like "no one can declare a field of type Foo". > > We went back and forth between representing this as a special constructor, and representing this as a modifier on the class (e.g., `implicitly-constructible`.) Each had pros and cons (but this isn't the time or place to discuss syntax choices). 
I suspect that you (though not everyone) would have found the "class modifier" route to be more evocative of what is really going on, though had we gone the other way, there would be different complaints. > > Of course, you _can_ omit the implicit constructor, and write an ordinary no-arg constructor that lets you set your preferred "default" values. But the cost of that might be not getting all the memory flattening you might hope for. Valhalla, unlike some other projects, is more constrained by the physics of object layout and allocation, and as such, exposes some difficult tradeoffs. (Previously, the VM made these tradeoffs for you, often conservatively.) > > To your third question, yes, this syntax is controversial. (The alternatives are not necessarily better, and everything you propose has already been discussed. As always, we may reconsider this after gathering more experience.) > > (As an aside, I would suggest that when confronted with something that seems confusing or surprising, to strive to fully understand before reaching for language like "make the same mistake again.") > > Cheers, > -Brian > > On 1/20/2024 2:00 AM, Smith Wilson wrote: >> Hello Valhalla community, >> I have reviewed the latest documents on the project and have some concerns about current design decisions. >> >> 1) Implicit constructor cannot have body. >> While I understand why we need implicit ctor and why it must have no args, I still don't understand why we can't allow users to set default values on their own using that ctor instead of implicitly defaulting them to (0, false, null). It could solve problems with such classes like LocalDate which require more special zeroInstance (1970-1-1) than value with pure defaults. >> I believe that even String class could be somehow adopted to be value class in future having default null-restricted value of "" with (byte[] value) assigned to empty byte array. Non-final fields (hash, hashIsZero) could be placed into separate final field (i.e. 
hashHolder) given that it will be forcedly flattened, so there will be no overhead. >> >> 2) Implicit constructor must be public. >> We all learned that overextending public apis can lead to problems. Great example of this is wrapper classes whose constructors have been deprecated long time ago and still causing a lot of problems. So, new classes (i.e. Optional) were designed to have static factories (Optional#of, Optional#empty), rather than exposing their internal constructors. (Moreover, constructor calls use different byte-code instructions than method calls, which also can cause byte-code incompatibilities in case of future migrations.) >> I don't understand why we are going to make same mistake again and why we can't allow implicit constructors to have any kind of visibility modifier. So, VM will be able to freely use zeroInstances where necessary, while user himself will be able to control use of class exposing special apis. >> >> 3) Using interface (LooselyConsistentValue) to express non-atomicness. >> Same story as with Serializable interface. It is considered that using marker-interface for such a problem was bad design. Although this was justified by the fact that we did not have annotations, it is now unclear what makes us to use interfaces again. While it is possible to come up with real-life use cases of Serializable where type restrictions may be required (some usage of ObjectI/OStream apis), for such VM-close features like non-atomicness there is no real need for such opportunity. (Moreover, we already have some inconveniences because of that. In some cases, type inference of "var" is already blowing up from large type unions like Number & Comparable & Serializable.) >> So, I believe we should use alternatives like class modifier or annotation, rather than polluting the type system for no reason. >> >> Regards, >> Wilson. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brian.goetz at oracle.com Sat Jan 20 19:22:16 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Sat, 20 Jan 2024 14:22:16 -0500 Subject: Value types design decisions In-Reply-To: References: Message-ID: <6839a682-645a-403a-8600-6a2668f7fd73@oracle.com> Forgot to address this part. On 1/20/2024 2:00 AM, Smith Wilson wrote: > It could solve problems with such classes like LocalDate which require > more special zeroInstance (1970-1-1) than value with pure defaults. The key thing to realize about LocalDate is that while it is a good candidate for being a value type, there is *literally no* good default value. Jan 1 1970 is not a good default value, as we've all seen by getting emails that tell us our subscriptions are going to expire 54 years ago. No other specific date is a good default either. This is a statement about the _domain_; there is no good default. (And when an abstraction does not have a good default, null is your friend.) Other classes have good defaults; Duration is such an example. These two examples (conveniently, from the same package) tell us a truth about modeling data with value objects: some classes have reasonable defaults (and we can exploit this to optimize their treatment, and can freely let people use uninitialized variables, as we do for primitives today), and some do not (and hence we need either to engage nullability, or work very hard to ensure that an object cannot possibly be used uninitialized. The reason that "implicit constructor" is a thing is that, even after we've identified that a class is a value class, we still have to identify something else: whether it is implicitly constructible or requires explicit construction. (There's other ways to frame it, of course, but the key is that this has to be in the programming model.)
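Brian's contrast between Duration and LocalDate is visible in the existing java.time API: Duration has a canonical, domain-meaningful zero instance, while LocalDate offers no distinguished value, so "no date" is modeled with null or Optional rather than some default date. A small illustration in plain current Java (no Valhalla features assumed):

```java
import java.time.Duration;
import java.time.LocalDate;
import java.util.Optional;

public class GoodAndBadDefaults {
    public static void main(String[] args) {
        // Duration has an obvious default: zero elapsed time.
        Duration elapsed = Duration.ZERO;
        System.out.println(elapsed.isZero());         // true

        // LocalDate has no such value; epoch day 0 is just an arbitrary
        // date -- hence the "your subscription expired 54 years ago" emails.
        System.out.println(LocalDate.ofEpochDay(0));  // 1970-01-01

        // With no good default in the domain, absence is expressed as
        // null / Optional, not as some default date.
        Optional<LocalDate> expiry = Optional.empty();
        System.out.println(expiry.isPresent());       // false
    }
}
```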
From davidalayachew at gmail.com Sat Jan 20 19:41:42 2024 From: davidalayachew at gmail.com (David Alayachew) Date: Sat, 20 Jan 2024 14:41:42 -0500 Subject: Value types design decisions In-Reply-To: <6839a682-645a-403a-8600-6a2668f7fd73@oracle.com> References: <6839a682-645a-403a-8600-6a2668f7fd73@oracle.com> Message-ID: Hello Brian, Just wanted to ask about this point below. > These two examples (conveniently, from the same package) > tell us a truth about modeling data with value objects: > some classes have reasonable defaults (and we can exploit > this to optimize their treatment, and can freely let > people use uninitialized variables, as we do for > primitives today), and some do not (and hence we need > either to engage nullability, or work very hard to ensure > that an object cannot possibly be used uninitialized. Oh woah, what does it look like to prevent an object to be used uninitialized? We don't have any compiler-enforced ways of doing this, right? You are talking about just manually code reviewing each use of the class to try and hope to catch any misuse? And if you mean manual review, is that really something that would be worth the effort? Aside from the most trivial situations, I can't think of an example. Thank you for your time and help! David Alayachew On Sat, Jan 20, 2024 at 2:23 PM Brian Goetz wrote: > Forgot to address this part. > > On 1/20/2024 2:00 AM, Smith Wilson wrote: > > It could solve problems with such classes like LocalDate which require > > more special zeroInstance (1970-1-1) than value with pure defaults. > > The key thing to realize about LocalDate is that while it is a good > candidate for being a value type, there is *literally no* good default > value. Jan 1 1970 is not a good default value, as we've all seen by > getting emails that tell us our subscriptions are going to expire 54 > years ago. No other specific date is a good default either. This is a > statement about the _domain_; there is no good default.
(And when an > abstraction does not have a good default, null is your friend.) > > Other classes have good defaults; Duration is such an example. > > These two examples (conveniently, from the same package) tell us a truth > about modeling data with value objects: some classes have reasonable > defaults (and we can exploit this to optimize their treatment, and can > freely let people use uninitialized variables, as we do for primitives > today), and some do not (and hence we need either to engage nullability, > or work very hard to ensure that an object cannot possibly be used > uninitialized. > > The reason that "implicit constructor" is a thing is that, even after > we've identified that a class is a value class, we still have to > identify something else: whether it is implicitly constructible or > requires explicit construction. (There's other ways to frame it, of > course, but the key is that this has to be in the programming model.) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From livedinsquares at gmail.com Sat Jan 20 19:43:15 2024 From: livedinsquares at gmail.com (Jonathan F) Date: Sat, 20 Jan 2024 19:43:15 +0000 Subject: The idea of implicit vs default Message-ID: I know the EG have considered pretty much everything (I've followed the forums all along) but I haven't seen this discussed yet, so here goes. It seems to me the novel concept of implicit for constructors may not be necessary, as (IMHO) it depends on the story that's told about how fields are initialised. Can't we say the following instead? For a value class Point, if it has the implicit constructor it also has a quasi-static field Point.default (I think that's still the plan). I want to say this constructor only has 2 things special about it: it has no body, and it's used to initialise Point.default when the class is initialised. Later, fields and array elements of type Point!
are (notionally) initialised to Point.default, not by calling the constructor, which seems more like reality and like how other fields are set to null/0. So there's nothing implicit about the constructor itself; and the creation of Point.default isn't magic either, it's like the creation of Point.class or enum constants - "just one of those things" that happens when a class initialises. If that makes sense, I'd go for the previous constructor syntax public default Point(). Meaning simply this is the constructor used for Point.default. Or maybe even public default-0 Point() if it's called the "zero instance". As a separate but related idea: assuming the language has Point.default, it could be better to have syntax like Point!.default for the object, with Point.default meaning null (as these value classes will effectively have 2 defaults), and in general T.default for any type, e.g. double.default. That seems useful in itself, but it would also allow the simple explanation that a field or array element of _any_ type T gets initialised to T.default. Even if that's not workable, it would make me uncomfortable if Point.default is non-null when a Point field actually defaults to null! best wishes, JF -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.goetz at oracle.com Sat Jan 20 20:17:28 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Sat, 20 Jan 2024 15:17:28 -0500 Subject: Value types design decisions In-Reply-To: References: <6839a682-645a-403a-8600-6a2668f7fd73@oracle.com> Message-ID: This interacts with nullity control. If a variable is nullable, its default value is null -- end of story. But nullability is often (not always) an impediment to flattening. So if you want a flat field / array, you probably are using a non-nullable field/component type.
Of the three kinds (identity (String), implicitly constructible value (Integer), non-implicitly-constructible value (LocalDate)), we get three different stories about flattening and failure modes: String![] -- these will never be flattened, so the default value of elements is still null. We have some design choices about whether to make "good faith efforts" for preventing these from being published before being fully initialized, but if one does escape, the downside is not terrible -- an unexpected NPE, but no loss of integrity. Integer![] -- these can get flattened routinely, and it's perfectly fine to publish an array whose elements are uninitialized, because Integer is implicitly constructible, which means the default value is just fine. LocalDate![] -- We can flatten these only if we have 100% ironclad proof that every element has been initialized to a valid value prior to publication. Allowing the zero to escape undermines a VM integrity promise, and this can never happen. As to "how would this work", design discussions are ongoing, stay tuned. On 1/20/2024 2:41 PM, David Alayachew wrote: > Hello Brian, > > Just wanted to ask about this point below. > > > These two examples (conveniently, from the same package) > > tell us a truth about modeling data with value objects: > > some classes have reasonable defaults (and we can exploit > > this to optimize their treatment, and can freely let > > people use uninitialized variables, as we do for > > primitives today), and some do not (and hence we need > > either to engage nullability, or work very hard to ensure > > that an object cannot possibly be used uninitialized. > > Oh woah, what does it look like to prevent an object to be used > uninitialized? We don't have any compiler-enforced ways of doing this, > right? You are talking about just manually code reviewing each use of > the class to try and hope to catch any misuse?
> > And if you mean manual review, is that really something that would be > worth the effort? Aside from the most trivial situations, I can't > think of an example. > > Thank you for your time and help! > David Alayachew > > On Sat, Jan 20, 2024 at 2:23 PM Brian Goetz > wrote: > > Forgot to address this part. > > On 1/20/2024 2:00 AM, Smith Wilson wrote: > > It could solve problems with such classes like LocalDate which > require > > more special zeroInstance (1970-1-1) than value with pure defaults. > > The key thing to realize about LocalDate is that while it is a good > candidate for being a value type, there is *literally no* good > default > value. Jan 1 1970 is not a good default value, as we've all seen by > getting emails that tell us our subscriptions are going to expire 54 > years ago. No other specific date is a good default either. This > is a > statement about the _domain_; there is no good default. (And when an > abstraction does not have a good default, null is your friend.) > > Other classes have good defaults; Duration is such an example. > > These two examples (conveniently, from the same package) tell us a > truth > about modeling data with value objects: some classes have reasonable > defaults (and we can exploit this to optimize their treatment, and > can > freely let people use uninitialized variables, as we do for > primitives > today), and some do not (and hence we need either to engage > nullability, > or work very hard to ensure that an object cannot possibly be used > uninitialized. > > The reason that "implicit constructor" is a thing is that, even after > we've identified that a class is a value class, we still have to > identify something else: whether it is implicitly constructible or > requires explicit construction. (There's other ways to frame it, of > course, but the key is that this has to be in the programming model.) > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brian.goetz at oracle.com Sat Jan 20 20:48:15 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Sat, 20 Jan 2024 15:48:15 -0500 Subject: The idea of implicit vs default In-Reply-To: References: Message-ID: <7ddbd79b-ba5f-4cbf-a51f-40b9f6a7f9a0@oracle.com> This is a nice idea, and it has come around several times in the design discussions. From a the-system-stops-at-the-source-code perspective, it seems fine; you declare a constructor to make "the default value", and arrange that this constructor is only ever called once (during class preparation, most likely), to initialize the "stamp". Then you use the stamp to stamp out default values. Easy, right? The only problem with this is that it is on a collision course with VM physics. Many of the integrity invariants, as well as many optimizations, are built on the assumption that initialization of a new object or array is always done with the zero-colored paint roller. If you want it done by stamps, that means not only do you have a stamp for Point, but you also need a bigger stamp for everything that has a Point flattened into it, and you have much more data motion for every object initialization. And initializing arrays means N individual stampings, rather than just a broad stroke with the zero roller -- even more data motion. Plus, there is more room for races, and therefore more work to prevent such races, when initializing by copying from an exemplar than initializing with bzero. And given that performance is a primary motivation for value classes, adding cost to initializing values would really have to pay off big to carry its weight. But the problem doesn't stop there, because some value classes simply have no good default, whether it is a physical zero or not. LocalDate is an example of such a class.
So after considering this multiple times, our conclusion is that while it could perhaps be done, it only solves part of the problem, and the cost would be unreasonably far out of line with the benefit. On 1/20/2024 2:43 PM, Jonathan F wrote: > I know the EG have considered pretty much everything (I've followed > the forums all along) but I haven't seen this discussed yet, so here goes. > > It seems to me the novel concept of implicit for constructors may not > be necessary, as (IMHO) it depends on the story that's told about how > fields are initialised. Can't we say the following instead? For a > value class Point, if it has the implicit constructor it also has a > quasi-static field Point.default (I think that's still the plan). I > want to say this constructor only has 2 things special about it: it > has no body, and it's used to initialise Point.default when the class > is initialised. Later, fields and array elements of type Point! are > (notionally) initialised to Point.default, not by calling the > constructor, which seems more like reality and like how other fields > are set to null/0. So there's nothing implicit about the constructor > itself; and the creation of Point.default isn't magic either, it's > like the creation of Point.class or enum constants - "just one of > those things" that happens when a class initialises. > > If that makes sense, I'd go for the previous constructor syntax public > default Point(). Meaning simply this is the constructor used for > Point.default. Or maybe even public default-0 Point() if it's called > the "zero instance".
That seems useful in itself, but it > would also allow the simple explanation that a field or array element > of _any_ type T gets initialised to T.default. > > Even if that's not workable, it would make me uncomfortable if > Point.default is non-null when a Point field actually defaults to null! > > best wishes, > JF > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anhmdq at gmail.com Sat Jan 20 21:03:33 2024 From: anhmdq at gmail.com (Quân Anh Mai) Date: Sun, 21 Jan 2024 05:03:33 +0800 Subject: Value types design decisions In-Reply-To: References: <6839a682-645a-403a-8600-6a2668f7fd73@oracle.com> Message-ID: Hi, I think the key idea is that the zero instance of an implicitly constructible class is an initialized object. In the future, when ! is extended to other classes then the object must be explicitly instantiated to be able to be assigned to a ! type, hence the name implicit constructor. Regards, Quan Anh On Sun, 21 Jan 2024 at 04:34, David Alayachew wrote: > Hello Brian, > > Just wanted to ask about this point below. > > > These two examples (conveniently, from the same package) > > tell us a truth about modeling data with value objects: > > some classes have reasonable defaults (and we can exploit > > this to optimize their treatment, and can freely let > > people use uninitialized variables, as we do for > > primitives today), and some do not (and hence we need > > either to engage nullability, or work very hard to ensure > > that an object cannot possibly be used uninitialized. > > Oh woah, what does it look like to prevent an object to be used > uninitialized? We don't have any compiler-enforced ways of doing this, > right? You are talking about just manually code reviewing each use of the > class to try and hope to catch any misuse? > > And if you mean manual review, is that really something that would be > worth the effort? Aside from the most trivial situations, I can't think of > an example.
> > Thank you for your time and help! > David Alayachew > > On Sat, Jan 20, 2024 at 2:23 PM Brian Goetz > wrote: > >> Forgot to address this part. >> >> On 1/20/2024 2:00 AM, Smith Wilson wrote: >> > It could solve problems with such classes like LocalDate which require >> > more special zeroInstance (1970-1-1) than value with pure defaults. >> >> The key thing to realize about LocalDate is that while it is a good >> candidate for being a value type, there is *literally no* good default >> value. Jan 1 1970 is not a good default value, as we've all seen by >> getting emails that tell us our subscriptions are going to expire 54 >> years ago. No other specific date is a good default either. This is a >> statement about the _domain_; there is no good default. (And when an >> abstraction does not have a good default, null is your friend.) >> >> Other classes have good defaults; Duration is such an example. >> >> These two examples (conveniently, from the same package) tell us a truth >> about modeling data with value objects: some classes have reasonable >> defaults (and we can exploit this to optimize their treatment, and can >> freely let people use uninitialized variables, as we do for primitives >> today), and some do not (and hence we need either to engage nullability, >> or work very hard to ensure that an object cannot possibly be used >> uninitialized. >> >> The reason that "implicit constructor" is a thing is that, even after >> we've identified that a class is a value class, we still have to >> identify something else: whether it is implicitly constructible or >> requires explicit construction. (There's other ways to frame it, of >> course, but the key is that this has to be in the programming model.) >> > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From davidalayachew at gmail.com Sat Jan 20 21:55:10 2024 From: davidalayachew at gmail.com (David Alayachew) Date: Sat, 20 Jan 2024 16:55:10 -0500 Subject: Value types design decisions In-Reply-To: References: <6839a682-645a-403a-8600-6a2668f7fd73@oracle.com> Message-ID: Hello Brian, Thank you for your response! > LocalDate![] -- We can flatten these only if we have 100% > ironclad proof that every element has been initialized to > a valid value prior to publication. Allowing the zero to > escape undermines a VM integrity promise, and this can > never happen. > > As to "how would this work", design discussions are > ongoing, stay tuned. I'll stay tuned. This one really intrigues me because I have no idea how on earth that could happen. Thank you for your time and help! David Alayachew On Sat, Jan 20, 2024 at 3:17 PM Brian Goetz wrote: > This interacts with nullity control. If a variable is nullable, its > default value is null -- end of story. But nullability is often (not > always) an impediment to flattening. So if you want a flat field / array, > you probably are using a non-nullable field/component type. Of the three > kinds (identity (String), implicitly constructible value (Integer), > non-implicitly-constructible value (LocalDate)), we get three different > stories about flattening and failure modes: > > String![] -- these will never be flattened, so the default value of > elements is still null. We have some design choices about whether to make > "good faith efforts" for preventing these from being published before being > fully initialized, but if one does escape, the downside is not terrible > -- an unexpected NPE, but no loss of integrity. > > Integer![] -- these can get flattened routinely, and it's perfectly fine > to publish an array whose elements are uninitialized, because Integer is > implicitly constructible, which means the default value is just fine.
> > LocalDate![] -- We can flatten these only if we have 100% ironclad proof > that every element has been initialized to a valid value prior to > publication. Allowing the zero to escape undermines a VM integrity > promise, and this can never happen. > > As to "how would this work", design discussions are ongoing, stay tuned. > > > On 1/20/2024 2:41 PM, David Alayachew wrote: > > Hello Brian, > > Just wanted to ask about this point below. > > > These two examples (conveniently, from the same package) > > tell us a truth about modeling data with value objects: > > some classes have reasonable defaults (and we can exploit > > this to optimize their treatment, and can freely let > > people use uninitialized variables, as we do for > > primitives today), and some do not (and hence we need > > either to engage nullability, or work very hard to ensure > > that an object cannot possibly be used uninitialized. > > Oh woah, what does it look like to prevent an object to be used > uninitialized? We don't have any compiler-enforced ways of doing this, > right? You are talking about just manually code reviewing each use of the > class to try and hope to catch any misuse? > > And if you mean manual review, is that really something that would be > worth the effort? Aside from the most trivial situations, I can't think of > an example. > > Thank you for your time and help! > David Alayachew > > On Sat, Jan 20, 2024 at 2:23 PM Brian Goetz > wrote: > >> Forgot to address this part. >> >> On 1/20/2024 2:00 AM, Smith Wilson wrote: >> > It could solve problems with such classes like LocalDate which require >> > more special zeroInstance (1970-1-1) than value with pure defaults. >> >> The key thing to realize about LocalDate is that while it is a good >> candidate for being a value type, there is *literally no* good default >> value. Jan 1 1970 is not a good default value, as we've all seen by >> getting emails that tell us our subscriptions are going to expire 54 >> years ago.
No other specific date is a good default either. This is a >> statement about the _domain_; there is no good default. (And when an >> abstraction does not have a good default, null is your friend.) >> >> Other classes have good defaults; Duration is such an example. >> >> These two examples (conveniently, from the same package) tell us a truth >> about modeling data with value objects: some classes have reasonable >> defaults (and we can exploit this to optimize their treatment, and can >> freely let people use uninitialized variables, as we do for primitives >> today), and some do not (and hence we need either to engage nullability, >> or work very hard to ensure that an object cannot possibly be used >> uninitialized. >> >> The reason that "implicit constructor" is a thing is that, even after >> we've identified that a class is a value class, we still have to >> identify something else: whether it is implicitly constructible or >> requires explicit construction. (There's other ways to frame it, of >> course, but the key is that this has to be in the programming model.) >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.r.rose at oracle.com Sat Jan 20 22:01:21 2024 From: john.r.rose at oracle.com (John Rose) Date: Sat, 20 Jan 2024 14:01:21 -0800 Subject: The idea of implicit vs default In-Reply-To: <7ddbd79b-ba5f-4cbf-a51f-40b9f6a7f9a0@oracle.com> References: <7ddbd79b-ba5f-4cbf-a51f-40b9f6a7f9a0@oracle.com> Message-ID: Thanks, Brian. Let me add some more thoughts about this, because it really isn't a case of "you guys missed an obvious move" or "you don't want us programmers to have good tools". The VM really, really likes its zeroes. This is because zero is the initial state of any scalar. Null is a kind of zero, from this point of view. Low-level data structure always needs to bootstrap from something definite, and Java bootstraps from a very small menu of zeroes (and null and false).
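John's privileged zeroes are visible in today's Java: every freshly allocated array starts at its type's zero (null for references), and any user-chosen initial value costs an explicit second pass, for example via java.util.Arrays. A minimal illustration in plain current Java (nothing Valhalla-specific):

```java
import java.time.LocalDate;
import java.util.Arrays;

public class ZeroPaint {
    public static void main(String[] args) {
        // Allocation is one broad stroke of the zero roller:
        // primitives come out as 0, references as null.
        int[] counts = new int[4];
        LocalDate[] dates = new LocalDate[4];
        System.out.println(counts[0]);                // 0
        System.out.println(dates[0]);                 // null

        // A non-zero "default" is an explicit extra pass over every
        // element -- a cost the VM never pays on its own.
        Arrays.fill(dates, LocalDate.ofEpochDay(0));
        System.out.println(dates[3]);                 // 1970-01-01

        // An initial value that is a function of the index is likewise
        // a library job, not a language or VM job.
        Arrays.setAll(counts, i -> i * i);
        System.out.println(Arrays.toString(counts));  // [0, 1, 4, 9]
    }
}
```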
We could imagine a software stack where zeroes are not privileged. In fact, at the source level, the special role of zeroes can be suppressed almost completely, except for array creation. But it's there, every time you start creating an object or array. If we try to take the idea of de-privileging zeroes and push it down into the VM, bad things happen. The VM physics are not friendly; you will see poorer performance if you try to dictate user-defined initial states. This is what Brian means when he talks about "paint rollers". Zero-colored paint is the standard paint in the Java stack, and you get a volume discount on it. On the other hand, it might seem to be just a "matter of software", arbitrarily adjustable, to allow programmers to create user-defined initial states. To support a whole spectrum of "paint colors", one for each job. But for Java it is not a mere "matter of software" and that is why we appeal to the (metaphor of) physics of computation. So, forget for a second about values, and try the mental exercise of redesigning the Java language (as of today), and its translation strategy to a VM, and the VM itself, so that all initial states are user controllable. You will need a few months to get a good start on this, and you will find it touches many parts of the JLS and JVMS. Don't forget the Java Memory Model, and installing the correct happens-before states for a reference that initializes to point to another object. In the end, you will find you don't want to finish this exercise. We've done enough of it, ourselves, in the years we've been working on Valhalla, to know we won't enjoy it. So we don't want to do it in Valhalla either, even "just for value objects". One place where things would go wrong is array creation performance. Recall that null is privileged, so that when you have an array that is created with reference fields they are set to null.
(And if it has flattened value objects any and all of those reference fields are set to null, in every array element.) That works so well and so simply it is easy to miss what just happened: The GC, with all its complex invariants about what goes where, starts "thinking about" an array element just after the zeroes are stored, and it "knows there's nothing there". When you store a non-null reference, the GC has to "start thinking some more" about that variable. It might even update a transactional log for that store operation. Now imagine a VM feature which made arrays initialized to some non-zero/non-null pattern. What must happen? Well, for many GCs (those with store barriers) the GC must register the value of each original reference stored in that array. Even if you are going to overwrite it immediately, the micro-states of the array (while it is under construction, while it has a mix of default values and really useful values) must be correctly managed. (Because the GC might have to collect storage while the array is partially populated.) In the end, setting up an array to user-defined default values turns into AN EXTRA PASS OVER EVERY ARRAY. (Put another way, it is in effect an assignment operation to every array element, not present in the code, but costly.) This extra complexity in VM physics turns into costs at the level of hardware (memory fabric) physics. (You might try something "lazier", like an array fill pointer, but that has its own costs, and a bug tail.) In the end, after all the heroics are done, what would we get in return? People who dislike zeroes could use non-zero values in their value types. Not a real prize for any self-respecting hero; not a good tradeoff. As others have already pointed out, you, the value class author, can always find a way to cope with those initial zeroes. If you really really are stuck on 42, then write your field accessor to add or xor with 42.
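The accessor trick John describes here, letting the all-zero state mean your preferred logical default, is the same move Christoph suggested earlier for dates (store an epoch offset so all-zero means 1970-01-01). It can be sketched like this, using a hypothetical Answer class written as an ordinary final class since value classes are still in preview:

```java
// Hypothetical sketch: the stored field is a bias from 42, so the
// VM's all-zero state already means "42" with no custom default needed.
public final class Answer {
    private final int biased;                    // stored as (value - 42)

    public Answer() { this.biased = 0; }         // zero state == logical 42
    public Answer(int value) { this.biased = value - 42; }

    public int value() { return biased + 42; }   // accessor undoes the bias

    public static void main(String[] args) {
        System.out.println(new Answer().value());   // 42
        System.out.println(new Answer(7).value());  // 7
    }
}
```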
If you really like some particular non-null reference, adjust the field accessor accordingly. But don't ask the VM to do these trivial chores for you, because it will make the rest of the system slower and/or more complex. For another system which did it the other way, please look at how C++ object constructors interact with C++ array creation. It is awkward, hard to understand, bug-prone, and expensive. We don't choose to adopt those costs into the Java language or VM. On the other hand, there will be frequent use cases where the user wants to place a non-default value as the initial state of every element of some new array. That's part of the programmer's toolkit, after all. That shouldn't be done at the level of the VM or language, obviously, since different use cases will choose different initial values. So this is a job for library APIs, not the language or VM. (Maybe the language should provide sugar; that will come later, maybe.) And, as long as we are talking about use cases for array construction, sometimes the initial array element is a FUNCTION of the index. Obviously not a job for the VM or language (unless there's sugar); this is a library job. So we are not saying your flat value arrays must always have that one globally defined zero-rich value. We are saying that zeroes have a privileged position in the language and VM, but the real action will always be in the library APIs. Are arrays the only reason we are "sticking with zero"? They certainly make the problem very notable, but any large collection of objects will also have similar extra costs, analogous to the GC-related costs I pointed out above, if their initialization is not allowed to be rich in zeros and (especially) nulls. Surely many of you on this mailing list have had moments when, as a Java programmer, you weighed the cost of leaving a field uninitialized (and working with the resulting zero as the first state) vs. initializing it in the constructor to a value that made more logical sense.
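The library-level array initialization described above already exists for today's arrays in `java.util.Arrays`: the VM zero-fills for free, and the library supplies the explicit extra pass, either with one constant value or with an element computed from its index. The class name `ArrayInit` is just for illustration.

```java
import java.util.Arrays;

public class ArrayInit {
    // One constant value everywhere: VM zero-fill, then one explicit pass.
    static int[] filled(int n, int v) {
        int[] a = new int[n];      // VM pass: all zeroes, for free
        Arrays.fill(a, v);         // library pass: the "extra pass over every array"
        return a;
    }

    // Element as a function of its index: also a plain library job.
    static int[] squares(int n) {
        int[] a = new int[n];
        Arrays.setAll(a, i -> i * i);
        return a;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(filled(4, 42))); // [42, 42, 42, 42]
        System.out.println(Arrays.toString(squares(4)));    // [0, 1, 4, 9]
    }
}
```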
Sometimes that choice makes for better performance if you don't execute that first assignment. Now imagine that a value class you wish to use has a non-zero default which makes variables of that type slightly slower to initialize (because of impacts on the GC and maybe others). You wouldn't thank the value author for this; you might send them an email asking them to push your desired embrace of zeroes into their class as well, so your class instances (in their flat value fields) will set up faster. Ultimately, our choice to support only zero-rich default/implicit/initial values is a push like that, once and for all, everywhere. It helps all programmers by helping the VM focus its optimizations on globally known values. Only the one paint color that has the bulk discount. And there can only be one that gets the full discount, since remembering one state requires zero (lg 1) bits. I hope this helps. I know it's complex and subtle. We've been wrestling with this particular issue for many years. On 20 Jan 2024, at 12:48, Brian Goetz wrote: > This is a nice idea, and it has come around several times in the design discussions. From a the-system-stops-at-the-source-code perspective, it seems fine; you declare a constructor to make "the default value", and arrange that this constructor is only ever called once (during class preparation, most likely), to initialize the "stamp". Then you use the stamp to stamp out default values. Easy, right? From livedinsquares at gmail.com Sat Jan 20 22:14:42 2024 From: livedinsquares at gmail.com (Jonathan F) Date: Sat, 20 Jan 2024 22:14:42 +0000 Subject: The idea of implicit vs default In-Reply-To: <7ddbd79b-ba5f-4cbf-a51f-40b9f6a7f9a0@oracle.com> References: <7ddbd79b-ba5f-4cbf-a51f-40b9f6a7f9a0@oracle.com> Message-ID: Hi Brian - thanks for the clear response. This is a nice idea, and it has come around several times in the design discussions.
From a the-system-stops-at-the-source-code perspective, it seems fine; you declare a constructor to make "the default value", and arrange that this constructor is only ever called once (during class preparation, most likely), to initialize the "stamp". Then you use the stamp to stamp out default values. Easy, right? Understood, and I remember the debates along these lines. But what I'm hoping to suggest - maybe it's simplistic - is to in no way change how the language/VM implement this feature, but 'just' describe it differently without introducing a new concept ('implicit') to the poor developer. Eliminating a keyword and an idea. So my (probably simplistic) reaction is: how visible is the mechanism for initialising a field or array (to most developers)? Couldn't the language spec cover its tracks by saying notionally variables are initialised from Point.default, but the VM has discretion to use a different mechanism, and the spec would specify integrity invariants etc. that are needed (and in practice couldn't be achieved by literally stamping with Point.default). So the language syntax, and the "basic idea" in the developer's head, are simple but with footnotes for experts. Hand-wavy I know, but maybe a spec description like that is possible, or maybe it would just be distasteful?! best wishes, Jonathan Finn On 20 January 2024 at 20:48:21, Brian Goetz (brian.goetz at oracle.com) wrote: This is a nice idea, and it has come around several times in the design discussions. From a the-system-stops-at-the-source-code perspective, it seems fine; you declare a constructor to make "the default value", and arrange that this constructor is only ever called once (during class preparation, most likely), to initialize the "stamp". Then you use the stamp to stamp out default values. Easy, right? The only problem with this is that it is on a collision course with VM physics.
Many of the integrity invariants, as well as many optimizations, are built on the assumption that initialization of a new object or array is always done with the zero-colored paint roller. If you want it done by stamps, that means not only do you have a stamp for Point, but you also need a bigger stamp for everything that has a Point flattened into it, and you have much more data motion for every object initialization. And initializing arrays means N individual stampings, rather than just a broad stroke with the zero roller -- even more data motion. Plus, there is more room for races, and therefore more work to prevent such races, when initializing by copying from an exemplar than initializing with bzero. And given that performance is a primary motivation for value classes, adding cost to initializing values would really have to pay off big to carry its weight. But the problem doesn't stop there, because some value classes simply have no good default, whether it is a physical zero or not. LocalDate is an example of such a class. So after considering this multiple times, our conclusion is that maybe it could be done, but even if so, it only solves part of the problem, and the cost would be unreasonably far out of line with the benefit. On 1/20/2024 2:43 PM, Jonathan F wrote: I know the EG have considered pretty much everything (I've followed the forums all along) but I haven't seen this discussed yet, so here goes. It seems to me the novel concept of implicit for constructors may not be necessary, as (IMHO) it depends on the story that's told about how fields are initialised. Can't we say the following instead? For a value class Point, if it has the implicit constructor it also has a quasi-static field Point.default (I think that's still the plan). I want to say this constructor only has 2 things special about it: it has no body, and it's used to initialise Point.default when the class is initialised. Later, fields and array elements of type Point!
are (notionally) initialised to Point.default, not by calling the constructor, which seems more like reality and like how other fields are set to null/0. So there's nothing implicit about the constructor itself; and the creation of Point.default isn't magic either, it's like the creation of Point.class or enum constants - "just one of those things" that happens when a class initialises. If that makes sense, I'd go for the previous constructor syntax public default Point(). Meaning simply this is the constructor used for Point.default. Or maybe even public default-0 Point() if it's called the "zero instance". As a separate but related idea: assuming the language has Point.default, it could be better to have syntax like Point!.default for the object, with Point.default meaning null (as these value classes will effectively have 2 defaults), and in general T.default for any type, e.g. double.default. That seems useful in itself, but it would also allow the simple explanation that a field or array element of _any_ type T gets initialised to T.default. Even if that's not workable, it would make me uncomfortable if Point.default is non-null when a Point field actually defaults to null! best wishes, JF From livedinsquares at gmail.com Sat Jan 20 22:34:26 2024 From: livedinsquares at gmail.com (Jonathan F) Date: Sat, 20 Jan 2024 22:34:26 +0000 Subject: The idea of implicit vs default In-Reply-To: References: <7ddbd79b-ba5f-4cbf-a51f-40b9f6a7f9a0@oracle.com> Message-ID: Hi John - thanks for the lucid explanation of why zero is special to the VM for numerous optimisations etc. (I know you've written about this before.) But just to be clear about my original point, I'm not advocating a non-zero implicit constructor. I like zero 8^) . See my reply to Brian: I'm just hoping we can have (or get away with?) a simpler way of describing value object creation, to avoid the idea of 'implicit'.
I'd love a fairly tidy view of the world for us non-expert developers, even if it's an illusion! best wishes Jonathan Finn On 20 January 2024 at 22:01:27, John Rose (john.r.rose at oracle.com) wrote: Thanks, Brian. Let me add some more thoughts about this, because it really isn't a case of "you guys missed an obvious move" or "you don't want us programmers to have good tools". The VM really, really likes its zeroes. This is because zero is the initial state of any scalar. Null is a kind of zero, from this point of view. Low-level data structure always needs to bootstrap from something definite, and Java bootstraps from a very small menu of zeroes (and null and false). [...] From john.r.rose at oracle.com Sat Jan 20 22:45:15 2024 From: john.r.rose at oracle.com (John Rose) Date: Sat, 20 Jan 2024 14:45:15 -0800 Subject: The idea of implicit vs default In-Reply-To: References: Message-ID: <0FC36AD9-2B01-459E-8565-35A105800C3D@oracle.com> On 20 Jan 2024, at 11:43, Jonathan F wrote: > [...] > fields and array elements of type Point! are (notionally) initialised to Point.default, not by calling the constructor, which seems more like reality and like how other fields are set to null/0. So there's nothing implicit about the constructor itself; and the creation of Point.default isn't magic either, it's like the creation of Point.class or enum constants - "just one of those things" that happens when a class initialises. > > If that makes sense, I'd go for the previous constructor syntax public default Point(). Meaning simply this is the constructor used for Point.default. Or maybe even public default-0 Point() if it's called the "zero instance". This would not violate VM "physics" as long as the default Point value truly was always "full of zeroes only". But it also wouldn't be very interesting. OTOH, if we were to allow "interesting" Point defaults, the cost would be the disruptions I listed in my previous message. The way I like to see it is the implicit constructor is a real constructor, but it is prevented from doing any field assignments, guaranteeing those favorable zero-rich fields. Giving a name to the result of that constructor is possible (as you point out) but it doesn't seem to buy much. The VM could notionally assign a named value when necessary and/or it could notionally call a no-op constructor when necessary: Is there much of a difference, at the level of user model?
It seems more economical to take an existing feature (constructors) and strength-reduce it (nop b/c no body) versus adding a point-feature for naming the constructed default. BTW, initializing a field or array element to a fixed value (zero) really is a distinct operation from assigning to a value. The whole conversation about non-zero defaults amounts to some people wishing for the convenience of implicit assignments (of their selected values) happening invisibly wherever their value appears. But the assignments would cost us, as I hope I demonstrated previously. Java avoids most invisible magic costs like that, when possible, so we can make our chosen invisible magic costs (object deallocation!!) be as cheap as possible. Here's a shorter statement identifying the invisible magic cost of a proposed default initialization feature: >> A user-defined field default, if not zero or null, must be translated into invisible field assignments at the creation of every object or array instance, wherever such a field is part of the created object or array elements (either directly, or indirectly via a flattened value). > [...] > Even if that's not workable, it would make me uncomfortable if Point.default is non-null when a Point field actually defaults to null! That's a good, er, point. There was a time before we embraced separate nullness markers, when we thought nullness ought to be controlled by a (value) class declaration, for all the uses of the value class. In that case, Point.default would reasonably mean "the default for any Point variable". We needed to introduce Point.ref (you may recall) to cover other use cases. We like having it flipped the way it is now, where Point is always a nullable, default-null type, and Point! is where the new behaviors appear. But that makes Point.default read more poorly as a result.
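John's "invisible field assignments" can be made visible in today's Java: creating an array with a non-zero default is, in effect, the VM's free zero-fill followed by one store per element (and, for reference types, potentially one GC write barrier per store). The names below are illustrative only, and the record is a stand-in for a flattened value class.

```java
import java.util.Arrays;

public class StampCost {
    record Point(int x, int y) {}   // stand-in for a value class

    // A user-defined default is an extra pass over the whole array:
    // N stores that the all-zero default never needs.
    static Point[] withDefault(int n, Point stamp) {
        Point[] a = new Point[n];   // VM pass: every element null (all-zero bits)
        Arrays.fill(a, stamp);      // the invisible assignments, made visible
        return a;
    }

    public static void main(String[] args) {
        System.out.println(withDefault(3, new Point(1, 2))[2]); // Point[x=1, y=2]
    }
}
```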
From john.r.rose at oracle.com Sat Jan 20 22:46:06 2024 From: john.r.rose at oracle.com (John Rose) Date: Sat, 20 Jan 2024 14:46:06 -0800 Subject: The idea of implicit vs default In-Reply-To: References: <7ddbd79b-ba5f-4cbf-a51f-40b9f6a7f9a0@oracle.com> Message-ID: <7269D9A8-48F1-47DA-B568-85A8B4B9641A@oracle.com> You are welcome! See my message that crossed yours. :-) On 20 Jan 2024, at 14:34, Jonathan F wrote: > Hi John - thanks for the lucid explanation of why zero is special to the VM for numerous optimisations etc. (I know you've written about this before.) > > But just to be clear about my original point... From brian.goetz at oracle.com Sat Jan 20 23:14:37 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Sat, 20 Jan 2024 18:14:37 -0500 Subject: The idea of implicit vs default In-Reply-To: References: <7ddbd79b-ba5f-4cbf-a51f-40b9f6a7f9a0@oracle.com> Message-ID: > Understood, and I remember the debates along these lines. But what I'm hoping to suggest - maybe it's simplistic - is to in no way change how the language/VM implement this feature, but 'just' describe it differently without introducing a new concept ('implicit') to the poor developer. Eliminating a keyword and an idea. Indeed, we tried to do this many times, and came up short every time. That's not to say there isn't a way, but we banged our heads against it good and hard over a long time. If you look at my reply to (I think) David on another mail today, you'll see that there really are three distinct situations (identity, value but not implicitly constructible, value and implicitly constructible) that have different treatments and consequences. So this "extra concept" will show up somewhere, it's just a matter of where we can put it so that it seems least annoying.
We felt that making it a constructor modifier "downleveled" its importance compared to making it a class modifier, but the concept has to be there somewhere, unless we want to give up on a lot of potential optimizations. There's a reason this has taken so long; the problem has layers of subtlety that were difficult to tease apart. It was almost like a Spanish Inquisition sketch: "The chief impediment to flattening is identity. Identity and nullity. The three main impediments are identity, nullity, and initialization safety. AMONGST our impediments are such diverse elements as identity, nullity, initialization safety, and atomicity..." Each time we discovered a new one, we tried various moves: denial, lumping, and eventually, acceptance; then began the long slow process of normalization. We *think* we've now gotten things so that each axis is independent and is defined as a useful semantic even separate from performance, but ... there could still be more turns of this crank. (And, as a perverse incentive, the more clearly we separate the various considerations, the more likely someone is to ask "surely we can get rid of X, no?") From livedinsquares at gmail.com Sat Jan 20 23:24:59 2024 From: livedinsquares at gmail.com (Jonathan F) Date: Sat, 20 Jan 2024 23:24:59 +0000 Subject: The idea of implicit vs default In-Reply-To: <0FC36AD9-2B01-459E-8565-35A105800C3D@oracle.com> References: <0FC36AD9-2B01-459E-8565-35A105800C3D@oracle.com> Message-ID: > The way I like to see it is the implicit constructor is a real constructor, but it is prevented from doing any field assignments, guaranteeing those favorable zero-rich fields.
> Giving a name to the result of that constructor is possible (as you point out) but it doesn't seem to buy much. The VM could notionally assign a named value when necessary and/or it could notionally call a no-op constructor when necessary: Is there much of a difference, at the level of user model? Yes there is! -- at least the way I understand it, as follows: Current story: when a field/array element is created, the type's default value (normally null or 0) is stored in it. But for Point!, this value has to be created by (notionally) calling its constructor, which seems so different this process has been given the name "implicit". New story: when a field/array element is created, the type's default value is stored in it. That's it. And ideally, maybe this value could be uniformly called T.default. You've got a fair point that _if_ Point.default were introduced just for this purpose then little is gained, but I thought Point.default was likely to happen anyway? And (speculating wildly) there may be a long-term plan to have T.default generally for any type (it feels important for generic code), but I know how cautious the EG has to be with all this. best wishes, Jonathan Finn From john.r.rose at oracle.com Sun Jan 21 01:03:13 2024 From: john.r.rose at oracle.com (John Rose) Date: Sat, 20 Jan 2024 17:03:13 -0800 Subject: The idea of implicit vs default In-Reply-To: References: <0FC36AD9-2B01-459E-8565-35A105800C3D@oracle.com> Message-ID: <1E37A35B-D6BC-4AD1-A232-2B0B932EE439@oracle.com> On 20 Jan 2024, at 15:24, Jonathan F wrote: >> The way I like to see it is the implicit constructor is a real constructor, but it is prevented from doing any field assignments, guaranteeing those favorable zero-rich fields. Giving a name to the result of that constructor is possible (as you point out) but it doesn't seem to buy much. The VM could notionally assign a named value when necessary and/or it could notionally call a no-op constructor when necessary: Is there much of a difference, at the level of user model? > > Yes there is! -- at least the way I understand it, as follows: > > Current story: when a field/array element is created, the type's default value (normally null or 0) is stored in it. But for Point!, this value has to be created by (notionally) calling its constructor, which seems so different this process has been given the name "implicit".
> New story: when a field/array element is created, the type's default value is stored in it. That's it. But then we define the default value as the thing you get when you run the mandated empty constructor, and I think the two notions collapse together. We don't (yet) have a notion of "default value" in the language (other than zero), so there's no place to "hang" the result of calling the empty constructor. If the empty constructor could produce side effects or varying values at varying times, there would be a logical distinction between the result of calling it yet another time and calling it once and saving the result. But (by design) there is no possible variation or side effect. > And ideally, maybe this value could be uniformly called T.default. At that point it's not the default for a value class; you'd want to say T!.default to get the thing we are talking about. > You've got a fair point that _if_ Point.default were introduced just for this purpose then little is gained, but I thought Point.default was likely to happen anyway? And (speculating wildly) there may be a long-term plan to have T.default generally for any type (it feels important for generic code), but I know how cautious the EG has to be with all this. At present T.default feels unlikely to me, even as a tool for generic programming. IF we allow class witnesses in specialized generics, THEN the foundational syntax would probably be T.class (where T is a type variable equipped with a class witness). And then T.default looks like very weak sugar. You can get it from a suitable API, such as T.class.defaultValue(). If we don't allow class witnesses, we still need some other witness-like mechanism that "anchors" each specialization. (Have you seen my Parametric VM sketch?) And again an API point does what T.default would do.
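The `T.class.defaultValue()` API John mentions is hypothetical, but its effect is observable today with core reflection: allocate a one-element array of any type and read back what the VM zero-filled it with. The class and method names below are illustrative only.

```java
import java.lang.reflect.Array;

public class DefaultOf {
    // The default value of any type, recovered from a freshly
    // zero-initialized one-element array (boxed for primitives).
    static Object defaultValue(Class<?> type) {
        return Array.get(Array.newInstance(type, 1), 0);
    }

    public static void main(String[] args) {
        System.out.println(defaultValue(int.class));     // 0
        System.out.println(defaultValue(boolean.class)); // false
        System.out.println(defaultValue(String.class));  // null
    }
}
```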
From liangchenblue at gmail.com Sun Jan 21 05:20:39 2024 From: liangchenblue at gmail.com (-) Date: Sat, 20 Jan 2024 23:20:39 -0600 Subject: Value types design decisions In-Reply-To: References: <6839a682-645a-403a-8600-6a2668f7fd73@oracle.com> Message-ID: Hi Brian, from your description of LocalDate![] comes a contingent question: Can (non-implicitly constructable) classes now have null-restricted (inlined) fields of types that are not implicitly constructable, given these fields will be initialized by the constructors to valid values? I assume any assignment of invalid values (notably, null) will simply result in exceptions such as NPE. If yes, how will such fields be represented in heap as: 1. identity object's final fields? 2. identity object's mutable fields? 3. non-implicitly constructable value object's final fields? Given these fields can have their proper states visible no later than their owner objects are visible. I think this is a new aspect of inlining that hasn't yet been discussed, but is probably worth discussing. Regards, Chen Liang On Sat, Jan 20, 2024 at 2:17 PM Brian Goetz wrote: > This interacts with nullity control. If a variable is nullable, its > default value is null -- end of story. But nullability is often (not > always) an impediment to flattening. So if you want a flat field / array, > you probably are using a non-nullable field/component type. Of the three > kinds (identity (String), implicitly constructible value (Integer), > non-implicitly-constructible value (LocalDate)), we get three different > stories about flattening and failure modes: > > String![] -- these will never be flattened, so the default value of > elements is still null. We have some design choices about whether to make > "good faith efforts" for preventing these from being published before being > fully initialized, but if one does escape, the downside is not terrible > -- an unexpected NPE, but no loss of integrity.
> > Integer![] -- these can get flattened routinely, and it's perfectly fine > to publish an array whose elements are uninitialized, because Integer is > implicitly constructible, which means the default value is just fine. > > LocalDate![] -- We can flatten these only if we have 100% ironclad proof > that every element has been initialized to a valid value prior to > publication. Allowing the zero to escape undermines a VM integrity > promise, and this can never happen. > > As to "how would this work", design discussions are ongoing, stay tuned. > > > On 1/20/2024 2:41 PM, David Alayachew wrote: > > Hello Brian, > > Just wanted to ask about this point below. > > > These two examples (conveniently, from the same package) > > tell us a truth about modeling data with value objects: > > some classes have reasonable defaults (and we can exploit > > this to optimize their treatment, and can freely let > > people use uninitialized variables, as we do for > > primitives today), and some do not (and hence we need > > either to engage nullability, or work very hard to ensure > > that an object cannot possibly be used uninitialized. > > Oh woah, what does it look like to prevent an object from being used > uninitialized? We don't have any compiler-enforced ways of doing this, > right? You are talking about just manually code reviewing each use of the > class to try and hope to catch any misuse? > > And if you mean manual review, is that really something that would be > worth the effort? Aside from the most trivial situations, I can't think of > an example. > > Thank you for your time and help! > David Alayachew > > On Sat, Jan 20, 2024 at 2:23 PM Brian Goetz > wrote: > >> Forgot to address this part. >> >> On 1/20/2024 2:00 AM, Smith Wilson wrote: >> > It could solve problems with such classes like LocalDate which require >> > more special zeroInstance (1970-1-1) than value with pure defaults.
>> >> The key thing to realize about LocalDate is that while it is a good >> candidate for being a value type, there is *literally no* good default >> value. Jan 1 1970 is not a good default value, as we've all seen by >> getting emails that tell us our subscriptions are going to expire 54 >> years ago. No other specific date is a good default either. This is a >> statement about the _domain_; there is no good default. (And when an >> abstraction does not have a good default, null is your friend.) >> >> Other classes have good defaults; Duration is such an example. >> >> These two examples (conveniently, from the same package) tell us a truth >> about modeling data with value objects: some classes have reasonable >> defaults (and we can exploit this to optimize their treatment, and can >> freely let people use uninitialized variables, as we do for primitives >> today), and some do not (and hence we need either to engage nullability, >> or work very hard to ensure that an object cannot possibly be used >> uninitialized. >> >> The reason that "implicit constructor" is a thing is that, even after >> we've identified that a class is a value class, we still have to >> identify something else: whether it is implicitly constructible or >> requires explicit construction. (There's other ways to frame it, of >> course, but the key is that this has to be in the programming model.) >>
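Brian's two examples are easy to check against today's java.time API: LocalDate's epoch-zero value is 1970-01-01, the very date behind those "expired 54 years ago" emails, while Duration's zero is a perfectly sensible default:

```java
import java.time.Duration;
import java.time.LocalDate;

public class DefaultCandidates {
    public static void main(String[] args) {
        LocalDate epoch = LocalDate.ofEpochDay(0); // day zero of the epoch
        System.out.println(epoch);                 // 1970-01-01
        System.out.println(Duration.ZERO);         // PT0S, a reasonable "no time at all" default
    }
}
```

The domain difference is visible right in the API: Duration exports a ZERO constant because its zero is meaningful, while LocalDate exports no such default.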
From john.r.rose at oracle.com Sun Jan 21 06:10:08 2024 From: john.r.rose at oracle.com (John Rose) Date: Sat, 20 Jan 2024 22:10:08 -0800 Subject: Value types design decisions In-Reply-To: References: <6839a682-645a-403a-8600-6a2668f7fd73@oracle.com> Message-ID: <9BE2F49A-2763-478A-B7FC-754BCCDDF2CF@oracle.com> On 20 Jan 2024, at 21:20, - wrote: > Hi Brian, from your description of LocalDate![] comes a contingent question: > Can (non-implicitly constructable) classes now have null-restricted > (inlined) fields of types that are not implicitly constructable, given > these fields will be initialized by the constructors to valid values? I > assume any assignment of invalid values (notably, null) will simply result > in exceptions such as NPE. > > If yes, how will such fields be represented in heap as: > 1. identity object's final fields? > 2. identity object's mutable fields? > 3. non-implicitly constructable value object's final fields? > > Given these fields can have their proper states visible no later than their > owner objects are visible. > > I think this is a new aspect of inlining that hasn't yet been discussed, > but is probably worth discussing. > That is well spotted. As Brian said, "Stay tuned". Here is a teaser: https://mail.openjdk.org/pipermail/valhalla-spec-experts/2023-December/002400.html You are right to imply that the implementation options are likely to differ for immutable and mutable fields (and sub-fields of values). As a result of certain conversations last summer at JVMLS, our design for Valhalla got simpler, because value and identity objects use more similar code shapes for their construction. That gave us a better logical framework for thinking about problems and requirements of guaranteed field initialization, the results of which will come out shortly. The basic insight is, if you can prove a field is initialized, you can tolerate an abstraction-breaking initial value.
The fun problem is building out the framework for making the necessary proofs, at the right levels in the stack. For mutable fields there is an additional problem, of course, of ensuring that races (on a field of type V! or V) cannot expose values that the author of V has deemed private to V. (Allowing V to opt out of atomicity makes the VM's job easier.) Once the initialization problem is solved, it is easier to attack the "safe mutation" problem, with or without an atomicity requirement. All of this stuff will happen "under the hood". As Brian says, once the semantics are nailed down, then the optimizations can start to show up. Once the rules are set properly, the games can begin. (I'm talking about the VM implementation games, the kind of games I like best.) I expect to see a long line of VM optimizations arise in the future after Valhalla, to flatten more and more value types, as optimizations get more clever and hardware gets more capable. My concern for this year is to get the semantic rules into the right shape (for both language and VM). That way the best optimizations are possible in the future. Examples of possible future stuff: Research on STM and HTM might be "mined" for potential candidates to optimize flat, mutable data. There are clever "cache friendly" and "lock free" algorithms that might be applicable. I have a handful of pet ideas... One is doubly-periodic arrays, where each array "card" has N payloads and N null bits (or N big fields and N little fields; N might be 7) but they are allocated column-style (within the card) instead of row-style (which is what we do today in the Valhalla prototype). The card size is tuned to the cache line size, of course. The indexing arithmetic is surprisingly clean, once you work it out. --
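The card layout described above can be sketched with plain index arithmetic. Everything below is an invented illustration (the class name, N = 7, an 8-byte payload, a 1-byte null marker), not the prototype's actual layout: each card packs N payloads followed by N null-marker bytes, and element i lands in card i / N, slot i % N.

```java
public class CardArray {
    static final int N = 7;                  // payloads per card (the example N above)
    static final int PAYLOAD = 8;            // bytes per payload, e.g. one long
    static final int CARD = N * PAYLOAD + N; // 63 bytes; pad to 64 to match a cache line

    final byte[] storage;                    // flat backing store, one region per card

    CardArray(int length) {
        int cards = (length + N - 1) / N;    // round up to whole cards
        storage = new byte[cards * CARD];
    }

    // Byte offset of element i's payload: column-style within its card.
    static int payloadOffset(int i) {
        return (i / N) * CARD + (i % N) * PAYLOAD;
    }

    // Byte offset of element i's null marker: after the card's payload columns.
    static int nullOffset(int i) {
        return (i / N) * CARD + N * PAYLOAD + (i % N);
    }

    public static void main(String[] args) {
        System.out.println(payloadOffset(0)); // 0
        System.out.println(nullOffset(0));    // 56
        System.out.println(payloadOffset(7)); // 63 -- first slot of the second card
    }
}
```

One division and one remainder per access is all the indexing costs, and the null marker for element i always sits in the same card (hence likely the same cache line) as its payload.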
John From forax at univ-mlv.fr Sun Jan 21 11:12:35 2024 From: forax at univ-mlv.fr (Remi Forax) Date: Sun, 21 Jan 2024 12:12:35 +0100 (CET) Subject: Value types design decisions In-Reply-To: <9BE2F49A-2763-478A-B7FC-754BCCDDF2CF@oracle.com> References: <6839a682-645a-403a-8600-6a2668f7fd73@oracle.com> <9BE2F49A-2763-478A-B7FC-754BCCDDF2CF@oracle.com> Message-ID: <42620334.107851550.1705835555229.JavaMail.zimbra@univ-eiffel.fr> ----- Original Message ----- > From: "John Rose" > To: "-" > Cc: "Brian Goetz" , "David Alayachew" , "Smith Wilson" > , "valhalla-dev" > Sent: Sunday, January 21, 2024 7:10:08 AM > Subject: Re: Value types design decisions > On 20 Jan 2024, at 21:20, - wrote: > >> Hi Brian, from your description of LocalDate![] comes a contingent question: >> Can (non-implicitly constructable) classes now have null-restricted >> (inlined) fields of types that are not implicitly constructable, given >> these fields will be initialized by the constructors to valid values? I >> assume any assignment of invalid values (notably, null) will simply result >> in exceptions such as NPE. >> >> If yes, how will such fields be represented in heap as: >> 1. identity object's final fields? >> 2. identity object's mutable fields? >> 3. non-implicitly constructable value object's final fields? >> >> Given these fields can have their proper states visible no later than their >> owner objects are visible. >> >> I think this is a new aspect of inlining that hasn't yet been discussed, >> but is probably worth discussing. >> > > That is well spotted. As Brian said, "Stay tuned". Here is a teaser: > > https://mail.openjdk.org/pipermail/valhalla-spec-experts/2023-December/002400.html > > You are right to imply that the implementation options are likely to differ for > immutable and mutable fields (and sub-fields of values).
> > As a result of certain conversations last summer at JVMLS, our design for > Valhalla got simpler, because value and identity objects use more similar code > shapes for their construction. That gave us a better logical framework > for > thinking about problems and requirements of guaranteed field initialization, > the results of which will come out shortly. > > The basic insight is, if you can prove a field is initialized, you can tolerate > an abstraction-breaking initial value. The fun problem is building out the > framework for making the necessary proofs, at the right levels in the stack. > > For mutable fields there is an additional problem, of course, of ensuring that > races (on a field of type V! or V) cannot expose values that the author of V > has deemed private to V. (Allowing V to opt out of atomicity makes the VM's > job easier.) Once the initialization problem is solved, it is easier to attack > the "safe mutation" problem, with or without an atomicity requirement. All of > this stuff will happen "under the hood". > > As Brian says, once the semantics are nailed down, then the optimizations can > start to show up. Once the rules are set properly, the games can begin. (I'm > talking about the VM implementation games, the kind of games I like best.) > > I expect to see a long line of VM optimizations arise in the future after > Valhalla, to flatten more and more value types, as optimizations get more > clever and hardware gets more capable. My concern for this year is to get the > semantic rules into the right shape (for both language and VM). That way the > best optimizations are possible in the future. > > Examples of possible future stuff: Research on STM and HTM might be "mined" for > potential candidates to optimize flat, mutable data. There are clever "cache > friendly" and "lock free" algorithms that might be applicable. I have a > handful of pet ideas... One is doubly-periodic arrays, where each array "card"
> has N payloads and N null bits (or N big fields and N little fields; N might be > 7) but they are allocated column-style (within the card) instead of row-style > (which is what we do today in the Valhalla prototype). The card size is tuned > to the cache line size, of course. The indexing arithmetic is surprisingly > clean, once you work it out. Another big area we have left on the side is the flattening of sealed (implicit or explicit) hierarchy of value types. > > -- John Rémi From liangchenblue at gmail.com Sun Jan 21 17:26:46 2024 From: liangchenblue at gmail.com (-) Date: Sun, 21 Jan 2024 11:26:46 -0600 Subject: Value types design decisions In-Reply-To: <42620334.107851550.1705835555229.JavaMail.zimbra@univ-eiffel.fr> References: <6839a682-645a-403a-8600-6a2668f7fd73@oracle.com> <9BE2F49A-2763-478A-B7FC-754BCCDDF2CF@oracle.com> <42620334.107851550.1705835555229.JavaMail.zimbra@univ-eiffel.fr> Message-ID: Hi John, Glad to see you bring up constructive classes again, a proposal I wholeheartedly love! For those who don't know, John published two essays in his code review FTP (also home to other outstanding essays): https://cr.openjdk.org/~jrose/jvm/eager-super-init.html - Safe construction, making final fields trusted, behaving like current @Stable or static/hidden/record finals (works well with leyden's (computed) constants) https://cr.openjdk.org/~jrose/jls/constructive-classes.html - Constructive classes in Java programming language, fixing long-standing discrepancy between normal final fields and inner class enclosing instance field (which is written before super constructor call in bytecode) And it seems you have already considered the possibility of inlining such "rejecting initial value" fields in this slide ( https://cr.openjdk.org/~jrose/values/field-initializations.pdf) even though I am not aware of its exact context. Would you mind sharing more behind-the-scenes details? Question: what do STM and HTM stand for?
Unfortunately I am not that familiar with hardware, and "doubly-periodic arrays" is also enigmatic to me (I can't find anything from online searches). I hope you can give me some references on these. Also a few general remarks: 1. The Valhalla research and development has benefitted mainline JDK again. Even though the "constructive classes" was initially an innovation to prevent creating new bytecode sequences that breaks compatibility with the existing new-dup-, it now allows regular Java classes to have more safety with their final fields and removed the long-standing weirdness of having inner classes initialize their outer instance before super call; the same happened back in JDK 11 when NestHost and NestMember were added, which I recall was designed for creating subclasses for generic specialization, but it removed all $accessor$0s generated by javac to access inner class private members, boosting performance and reducing class file sizes. 2. Currently our value class is quite restrictive (no field in abstract values, only null-restricted fields for zero-default value classes, etc) but they are futureproof, as these capabilities can be expanded (algebraic types with sealed, initialization-guard for final fields that reject default values) without hurting backward compatibility. Love this way of enhancement. 
Chen On Sun, Jan 21, 2024 at 5:13 AM Remi Forax wrote: > ----- Original Message ----- > > From: "John Rose" > > To: "-" > > Cc: "Brian Goetz" , "David Alayachew" < davidalayachew at gmail.com>, "Smith Wilson" > > , "valhalla-dev" > > Sent: Sunday, January 21, 2024 7:10:08 AM > > Subject: Re: Value types design decisions > > > On 20 Jan 2024, at 21:20, - wrote: > > > >> Hi Brian, from your description of LocalDate![] comes a contingent > question: > >> Can (non-implicitly constructable) classes now have null-restricted > >> (inlined) fields of types that are not implicitly constructable, given > >> these fields will be initialized by the constructors to valid values? I > >> assume any assignment of invalid values (notably, null) will simply > result > >> in exceptions such as NPE. > >> > >> If yes, how will such fields be represented in heap as: > >> 1. identity object's final fields? > >> 2. identity object's mutable fields? > >> 3. non-implicitly constructable value object's final fields? > >> > >> Given these fields can have their proper states visible no later than > their > >> owner objects are visible. > >> > >> I think this is a new aspect of inlining that hasn't yet been discussed, > >> but is probably worth discussing. > >> > > > > That is well spotted. As Brian said, "Stay tuned". Here is a teaser: > > > > > https://mail.openjdk.org/pipermail/valhalla-spec-experts/2023-December/002400.html > > > > You are right to imply that the implementation options are likely to > differ for > > immutable and mutable fields (and sub-fields of values). > > > > As a result of certain conversations last summer at JVMLS, our design for > > Valhalla got simpler, because value and identity objects use more > similar code > > shapes for their construction. That gave us a better logical framework > for > > thinking about problems and requirements of guaranteed field > initialization, > > the results of which will come out shortly.
> > > > The basic insight is, if you can prove a field is initialized, you can > tolerate > an abstraction-breaking initial value. The fun problem is building out > the > framework for making the necessary proofs, at the right levels in the > stack. > > > > For mutable fields there is an additional problem, of course, of > ensuring that > races (on a field of type V! or V) cannot expose values that the author > of V > has deemed private to V. (Allowing V to opt out of atomicity makes the > VM's > job easier.) Once the initialization problem is solved, it is easier to > attack > the "safe mutation" problem, with or without an atomicity requirement. > All of > this stuff will happen "under the hood". > > > > As Brian says, once the semantics are nailed down, then the > optimizations can > start to show up. Once the rules are set properly, the games can > begin. (I'm > talking about the VM implementation games, the kind of games I like > best.) > > > > I expect to see a long line of VM optimizations arise in the future after > > Valhalla, to flatten more and more value types, as optimizations get more > > clever and hardware gets more capable. My concern for this year is to > get the > semantic rules into the right shape (for both language and VM). That > way the > best optimizations are possible in the future. > > > > Examples of possible future stuff: Research on STM and HTM might be > "mined" for > potential candidates to optimize flat, mutable data. There are clever > "cache > friendly" and "lock free" algorithms that might be applicable. I have a > > handful of pet ideas... One is doubly-periodic arrays, where each array > "card" > has N payloads and N null bits (or N big fields and N little fields; N > might be > 7) but they are allocated column-style (within the card) instead of > row-style > (which is what we do today in the Valhalla prototype). The card size > is tuned > to the cache line size, of course.
The indexing arithmetic is > surprisingly > clean, once you work it out. > > > Another big area we have left on the side is the flattening of sealed > (implicit or explicit) hierarchy of value types. > > > > > -- John > > Rémi From brian.goetz at oracle.com Sun Jan 21 20:52:03 2024 From: brian.goetz at oracle.com (Brian Goetz) Date: Sun, 21 Jan 2024 15:52:03 -0500 Subject: Value types design decisions In-Reply-To: References: <6839a682-645a-403a-8600-6a2668f7fd73@oracle.com> Message-ID: <8235844e-4d10-4de3-91b2-3c603a140679@oracle.com> This is something that we are actively exploring now, but haven't yet settled on a treatment of. Though null-restricted != inlined (e.g., String![]). We can always treat LocalDate![] the same as String![], but we'd like to do better. Here, there are higher requirements to initialize-before-possible-use. An assignment of null to an Anything![] element will result in an NPE/ASE (just as today assigning an Integer to an element of a String[] cast to Object[].) So invalid-values-on-write are handled. On 1/21/2024 12:20 AM, - wrote: > Hi Brian, from your description of LocalDate![] comes a contingent > question: > Can (non-implicitly constructable) classes now have null-restricted > (inlined) fields of types that are not implicitly constructable, given > these fields will be initialized by the constructors to valid values? > I assume any assignment of invalid values (notably, null) will simply > result in exceptions such as NPE. > > If yes, how will such fields be represented in heap as: > 1. identity object's final fields? > 2. identity object's mutable fields? > 3. non-implicitly constructable value object's final fields? > > Given these fields can have their proper states visible no later than > their owner objects are visible. > > I think this is a new aspect of inlining that hasn't yet been > discussed, but is probably worth discussing.
> > Regards, > Chen Liang > > On Sat, Jan 20, 2024 at 2:17 PM Brian Goetz > wrote: > > This interacts with nullity control. If a variable is nullable, > its default value is null -- end of story. But nullability is > often (not always) an impediment to flattening. So if you want a > flat field / array, you probably are using a non-nullable > field/component type. Of the three kinds (identity (String), > implicitly constructible value (Integer), > non-implicitly-constructible value (LocalDate)), we get three > different stories about flattening and failure modes: > > String![] -- these will never be flattened, so the default value > of elements is still null. We have some design choices about > whether to make "good faith efforts" for preventing these from > being published before being > fully initialized, but if one does > escape, the downside is not terrible -- an unexpected NPE, but no > loss of integrity. > > Integer![] -- these can get flattened routinely, and it's > perfectly fine to publish an array whose elements are > uninitialized, because Integer is implicitly constructible, which > means the default value is just fine. > > LocalDate![] -- We can flatten these only if we have 100% ironclad > proof that every element has been initialized to a valid value > prior to publication. Allowing the zero to escape undermines a VM > integrity promise, and this can never happen. > > As to "how would this work", design discussions are ongoing, stay > tuned. > > > On 1/20/2024 2:41 PM, David Alayachew wrote: >> Hello Brian, >> >> Just wanted to ask about this point below.
>> > These two examples (conveniently, from the same package) >> > tell us a truth about modeling data with value objects: >> > some classes have reasonable defaults (and we can exploit >> > this to optimize their treatment, and can freely let >> > people use uninitialized variables, as we do for >> > primitives today), and some do not (and hence we need >> > either to engage nullability, or work very hard to ensure >> > that an object cannot possibly be used uninitialized. >> >> Oh woah, what does it look like to prevent an object from being used >> uninitialized? We don't have any compiler-enforced ways of doing >> this, right? You are talking about just manually code reviewing >> each use of the class to try and hope to catch any misuse? >> >> And if you mean manual review, is that really something that >> would be worth the effort? Aside from the most trivial >> situations, I can't think of an example. >> >> Thank you for your time and help! >> David Alayachew >> >> On Sat, Jan 20, 2024 at 2:23 PM Brian Goetz >> wrote: >> >> Forgot to address this part. >> >> On 1/20/2024 2:00 AM, Smith Wilson wrote: >> > It could solve problems with such classes like LocalDate >> which require >> > more special zeroInstance (1970-1-1) than value with pure >> defaults. >> >> The key thing to realize about LocalDate is that while it is >> a good >> candidate for being a value type, there is *literally no* >> good default >> value. Jan 1 1970 is not a good default value, as we've all >> seen by >> getting emails that tell us our subscriptions are going to >> expire 54 >> years ago. No other specific date is a good default either. >> This is a >> statement about the _domain_; there is no good default. (And >> when an >> abstraction does not have a good default, null is your friend.) >> >> Other classes have good defaults; Duration is such an example.
>> >> These two examples (conveniently, from the same package) tell >> us a truth >> about modeling data with value objects: some classes have >> reasonable >> defaults (and we can exploit this to optimize their >> treatment, and can >> freely let people use uninitialized variables, as we do for >> primitives >> today), and some do not (and hence we need either to engage >> nullability, >> or work very hard to ensure that an object cannot possibly be >> used >> uninitialized. >> >> The reason that "implicit constructor" is a thing is that, >> even after >> we've identified that a class is a value class, we still have to >> identify something else: whether it is implicitly >> constructible or >> requires explicit construction. (There's other ways to frame >> it, of >> course, but the key is that this has to be in the programming >> model.) >> > From pedro.lamarao at prodist.com.br Mon Jan 22 13:38:37 2024 From: pedro.lamarao at prodist.com.br (Pedro Lamarão) Date: Mon, 22 Jan 2024 10:38:37 -0300 Subject: The idea of implicit vs default In-Reply-To: References: <7ddbd79b-ba5f-4cbf-a51f-40b9f6a7f9a0@oracle.com> Message-ID: On Sat, 20 Jan 2024 at 19:02, John Rose wrote: > For another system which did it the other way, please look at how C++ > object constructors interact with C++ array creation. It is awkward, hard > to understand, bug-prone, and expensive. We don't choose to adopt those > costs into the Java language or VM. > IIRC, one of the reasons C++ introduced "default constructors" was to restore the optimized initialization of default initialized arrays for that object. In C++, the presence of any explicitly defined constructor disables the implicitly defined zero-argument constructor.
But if one reintroduces the zero-argument constructor explicitly, it is not guaranteed to have the effect of "default initialization", as the body of that constructor is free to do anything. Marking it as = default invokes the "default initialized fields" definition, restoring the optimization. To my intuition, this is generally the same as the "implicit constructor" being proposed by Valhalla. Some have argued that this is difficult to understand, but AFAIK, this is not a particularly heavy pedagogical burden on C++. -- Pedro Lamarão From vromero at openjdk.org Tue Jan 23 22:22:38 2024 From: vromero at openjdk.org (Vicente Romero) Date: Tue, 23 Jan 2024 22:22:38 GMT Subject: [lworld] RFR: 8324052: [lworld] remove experimental code from lworld Message-ID: removing previous developments from lworld ------------- Commit messages: - 8324052: [lworld] remove experimental code from lworld Changes: https://git.openjdk.org/valhalla/pull/976/files Webrev: https://webrevs.openjdk.org/?repo=valhalla&pr=976&range=00 Issue: https://bugs.openjdk.org/browse/JDK-8324052 Stats: 3755 lines in 232 files changed: 855 ins; 2403 del; 497 mod Patch: https://git.openjdk.org/valhalla/pull/976.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/976/head:pull/976 PR: https://git.openjdk.org/valhalla/pull/976 From vromero at openjdk.org Wed Jan 24 13:09:55 2024 From: vromero at openjdk.org (Vicente Romero) Date: Wed, 24 Jan 2024 13:09:55 GMT Subject: Integrated: 8324052: [lworld] remove experimental code from lworld In-Reply-To: References: Message-ID: <547RbjR7aYhVGOlCzrmuBCI6LvIi6dp8ZtIzt2JYipY=.6feb7e6a-177c-4183-96cd-3a870ae059ac@github.com> On Tue, 23 Jan 2024 22:15:53 GMT, Vicente Romero wrote: > removing previous developments from lworld This pull request has now been integrated.
Changeset: a784b8b0 Author: Vicente Romero URL: https://git.openjdk.org/valhalla/commit/a784b8b00f5e6c559493cceb734331edb0a4acaa Stats: 3755 lines in 232 files changed: 855 ins; 2403 del; 497 mod 8324052: [lworld] remove experimental code from lworld ------------- PR: https://git.openjdk.org/valhalla/pull/976 From vromero at openjdk.org Fri Jan 26 07:19:06 2024 From: vromero at openjdk.org (Vicente Romero) Date: Fri, 26 Jan 2024 07:19:06 GMT Subject: RFR: 8324643: [lworld] remove ACC_VALUE flag and the 'identity' modifier Message-ID: removing flag ACC_VALUE and identity modifier and related refactorings ------------- Commit messages: - 8324643: [lworld] remove ACC_VALUE flag and the 'identity' modifier Changes: https://git.openjdk.org/valhalla/pull/977/files Webrev: https://webrevs.openjdk.org/?repo=valhalla&pr=977&range=00 Issue: https://bugs.openjdk.org/browse/JDK-8324643 Stats: 687 lines in 41 files changed: 21 ins; 481 del; 185 mod Patch: https://git.openjdk.org/valhalla/pull/977.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/977/head:pull/977 PR: https://git.openjdk.org/valhalla/pull/977 From vromero at openjdk.org Fri Jan 26 07:32:56 2024 From: vromero at openjdk.org (Vicente Romero) Date: Fri, 26 Jan 2024 07:32:56 GMT Subject: RFR: 8324643: [lworld] remove ACC_VALUE flag and the 'identity' modifier [v2] In-Reply-To: References: Message-ID: > removing flag ACC_VALUE and identity modifier and related refactorings Vicente Romero has updated the pull request incrementally with one additional commit since the last revision: minor fix ------------- Changes: - all: https://git.openjdk.org/valhalla/pull/977/files - new: https://git.openjdk.org/valhalla/pull/977/files/abd7e13f..7182250a Webrevs: - full: https://webrevs.openjdk.org/?repo=valhalla&pr=977&range=01 - incr: https://webrevs.openjdk.org/?repo=valhalla&pr=977&range=00-01 Stats: 1 line in 1 file changed: 0 ins; 0 del; 1 mod Patch: https://git.openjdk.org/valhalla/pull/977.diff Fetch: git 
fetch https://git.openjdk.org/valhalla.git pull/977/head:pull/977 PR: https://git.openjdk.org/valhalla/pull/977 From vromero at openjdk.org Fri Jan 26 18:33:29 2024 From: vromero at openjdk.org (Vicente Romero) Date: Fri, 26 Jan 2024 18:33:29 GMT Subject: RFR: 8324643: [lworld] remove ACC_VALUE flag and the 'identity' modifier [v3] In-Reply-To: References: Message-ID: > removing flag ACC_VALUE and identity modifier and related refactorings Vicente Romero has updated the pull request incrementally with one additional commit since the last revision: additional changes ------------- Changes: - all: https://git.openjdk.org/valhalla/pull/977/files - new: https://git.openjdk.org/valhalla/pull/977/files/7182250a..8a09da9b Webrevs: - full: https://webrevs.openjdk.org/?repo=valhalla&pr=977&range=02 - incr: https://webrevs.openjdk.org/?repo=valhalla&pr=977&range=01-02 Stats: 117 lines in 3 files changed: 1 ins; 0 del; 116 mod Patch: https://git.openjdk.org/valhalla/pull/977.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/977/head:pull/977 PR: https://git.openjdk.org/valhalla/pull/977 From vromero at openjdk.org Fri Jan 26 19:38:49 2024 From: vromero at openjdk.org (Vicente Romero) Date: Fri, 26 Jan 2024 19:38:49 GMT Subject: Integrated: 8324643: [lworld] remove ACC_VALUE flag and the 'identity' modifier In-Reply-To: References: Message-ID: On Fri, 26 Jan 2024 07:13:01 GMT, Vicente Romero wrote: > removing flag ACC_VALUE and identity modifier and related refactorings This pull request has now been integrated. 
Changeset: a15bd856 Author: Vicente Romero URL: https://git.openjdk.org/valhalla/commit/a15bd856f9f4c8c7271c5c360d57172a849eac9c Stats: 756 lines in 42 files changed: 22 ins; 481 del; 253 mod 8324643: [lworld] remove ACC_VALUE flag and the 'identity' modifier ------------- PR: https://git.openjdk.org/valhalla/pull/977 From vromero at openjdk.org Fri Jan 26 21:16:05 2024 From: vromero at openjdk.org (Vicente Romero) Date: Fri, 26 Jan 2024 21:16:05 GMT Subject: Integrated: 8324784: [lworld] remove constant propagation code and tests Message-ID: removing experimental code related to constant propagation of instance fields in primitive classes ------------- Commit messages: - 8324784: [lworld] remove constant propagation code and tests Changes: https://git.openjdk.org/valhalla/pull/978/files Webrev: https://webrevs.openjdk.org/?repo=valhalla&pr=978&range=00 Issue: https://bugs.openjdk.org/browse/JDK-8324784 Stats: 198 lines in 8 files changed: 0 ins; 195 del; 3 mod Patch: https://git.openjdk.org/valhalla/pull/978.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/978/head:pull/978 PR: https://git.openjdk.org/valhalla/pull/978 From vromero at openjdk.org Fri Jan 26 21:16:05 2024 From: vromero at openjdk.org (Vicente Romero) Date: Fri, 26 Jan 2024 21:16:05 GMT Subject: Integrated: 8324784: [lworld] remove constant propagation code and tests In-Reply-To: References: Message-ID: On Fri, 26 Jan 2024 21:10:42 GMT, Vicente Romero wrote: > removing experimental code related to constant propagation of instance fields in primitive classes This pull request has now been integrated. 
Changeset: dbb07388 Author: Vicente Romero URL: https://git.openjdk.org/valhalla/commit/dbb073880595884dde6917ea32c792fcc998d80d Stats: 198 lines in 8 files changed: 0 ins; 195 del; 3 mod 8324784: [lworld] remove constant propagation code and tests ------------- PR: https://git.openjdk.org/valhalla/pull/978 From mchung at openjdk.org Sat Jan 27 00:29:01 2024 From: mchung at openjdk.org (Mandy Chung) Date: Sat, 27 Jan 2024 00:29:01 GMT Subject: [lworld] RFR: 8324792: [lworld] remove VarHandle test cases for primitive value class Message-ID: <1lggCZ26cnRWcil8E4ol6XLUIWCYS0Qhj9jOhIX1GDc=.a285f588-2b01-4389-9afa-6dc405ec07d2@github.com> Clean up test/jdk/java/lang/invoke/VarHandles tests and remove tests for primitive value classes. Since `.class` is adequate to refer to the type, the template files are now updated to match the mainline with only small change for valhalla. ------------- Commit messages: - update copyright header - 8324792: [lworld] remove VarHandle test cases for primitive value class Changes: https://git.openjdk.org/valhalla/pull/979/files Webrev: https://webrevs.openjdk.org/?repo=valhalla&pr=979&range=00 Issue: https://bugs.openjdk.org/browse/JDK-8324792 Stats: 6767 lines in 40 files changed: 23 ins; 4416 del; 2328 mod Patch: https://git.openjdk.org/valhalla/pull/979.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/979/head:pull/979 PR: https://git.openjdk.org/valhalla/pull/979 From rriggs at openjdk.org Mon Jan 29 16:24:05 2024 From: rriggs at openjdk.org (Roger Riggs) Date: Mon, 29 Jan 2024 16:24:05 GMT Subject: [lworld] RFR: 8324792: [lworld] remove VarHandle test cases for primitive value class In-Reply-To: <1lggCZ26cnRWcil8E4ol6XLUIWCYS0Qhj9jOhIX1GDc=.a285f588-2b01-4389-9afa-6dc405ec07d2@github.com> References: <1lggCZ26cnRWcil8E4ol6XLUIWCYS0Qhj9jOhIX1GDc=.a285f588-2b01-4389-9afa-6dc405ec07d2@github.com> Message-ID: On Sat, 27 Jan 2024 00:23:35 GMT, Mandy Chung wrote: > Clean up test/jdk/java/lang/invoke/VarHandles tests and 
remove tests for primitive value classes. Since `.class` is adequate to refer to the type, the template files are now updated to match the mainline with only small change for valhalla. lgtm test/jdk/java/lang/invoke/VarHandles/X-VarHandleTestAccess.java.template line 495: > 493: } > 494: > 495: Extra blank line? ------------- Marked as reviewed by rriggs (Committer). PR Review: https://git.openjdk.org/valhalla/pull/979#pullrequestreview-1849079545 PR Review Comment: https://git.openjdk.org/valhalla/pull/979#discussion_r1469841688 From vromero at openjdk.org Mon Jan 29 19:25:17 2024 From: vromero at openjdk.org (Vicente Romero) Date: Mon, 29 Jan 2024 19:25:17 GMT Subject: Integrated: 8194743: Compiler implementation for Statements before super() Message-ID: backporting JDK-8194743 to valhalla, not a clean backport though ------------- Commit messages: - 8194743: Compiler implementation for Statements before super() Changes: https://git.openjdk.org/valhalla/pull/981/files Webrev: https://webrevs.openjdk.org/?repo=valhalla&pr=981&range=00 Issue: https://bugs.openjdk.org/browse/JDK-8194743 Stats: 1542 lines in 29 files changed: 1155 ins; 273 del; 114 mod Patch: https://git.openjdk.org/valhalla/pull/981.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/981/head:pull/981 PR: https://git.openjdk.org/valhalla/pull/981 From vromero at openjdk.org Mon Jan 29 19:25:19 2024 From: vromero at openjdk.org (Vicente Romero) Date: Mon, 29 Jan 2024 19:25:19 GMT Subject: Integrated: 8194743: Compiler implementation for Statements before super() In-Reply-To: References: Message-ID: On Mon, 29 Jan 2024 19:12:52 GMT, Vicente Romero wrote: > backporting JDK-8194743 to valhalla, not a clean backport though This pull request has now been integrated. 
Changeset: 256f8ad7 Author: Vicente Romero URL: https://git.openjdk.org/valhalla/commit/256f8ad744637de3a731f873cba0105c4bb60bd3 Stats: 1542 lines in 29 files changed: 1155 ins; 273 del; 114 mod 8194743: Compiler implementation for Statements before super() ------------- PR: https://git.openjdk.org/valhalla/pull/981 From vromero at openjdk.org Mon Jan 29 20:25:59 2024 From: vromero at openjdk.org (Vicente Romero) Date: Mon, 29 Jan 2024 20:25:59 GMT Subject: Integrated: 8324864: [lworld] remove non-cyclic membership checks Message-ID: dropping code that was checking for non-cyclic membership for value classes with implicit constructors ------------- Commit messages: - 8324864: [lworld] remove non-cyclic membership checks Changes: https://git.openjdk.org/valhalla/pull/982/files Webrev: https://webrevs.openjdk.org/?repo=valhalla&pr=982&range=00 Issue: https://bugs.openjdk.org/browse/JDK-8324864 Stats: 101 lines in 4 files changed: 0 ins; 101 del; 0 mod Patch: https://git.openjdk.org/valhalla/pull/982.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/982/head:pull/982 PR: https://git.openjdk.org/valhalla/pull/982 From vromero at openjdk.org Mon Jan 29 20:25:59 2024 From: vromero at openjdk.org (Vicente Romero) Date: Mon, 29 Jan 2024 20:25:59 GMT Subject: Integrated: 8324864: [lworld] remove non-cyclic membership checks In-Reply-To: References: Message-ID: On Mon, 29 Jan 2024 20:19:47 GMT, Vicente Romero wrote: > dropping code that was checking for non-cyclic membership for value classes with implicit constructors This pull request has now been integrated. 
Changeset: aff06518 Author: Vicente Romero URL: https://git.openjdk.org/valhalla/commit/aff06518698db9fdc472872cd9b241420a0f770a Stats: 101 lines in 4 files changed: 0 ins; 101 del; 0 mod 8324864: [lworld] remove non-cyclic membership checks ------------- PR: https://git.openjdk.org/valhalla/pull/982 From mchung at openjdk.org Tue Jan 30 01:26:43 2024 From: mchung at openjdk.org (Mandy Chung) Date: Tue, 30 Jan 2024 01:26:43 GMT Subject: [lworld] RFR: 8324792: [lworld] remove VarHandle test cases for primitive value class In-Reply-To: References: <1lggCZ26cnRWcil8E4ol6XLUIWCYS0Qhj9jOhIX1GDc=.a285f588-2b01-4389-9afa-6dc405ec07d2@github.com> Message-ID: On Mon, 29 Jan 2024 16:16:51 GMT, Roger Riggs wrote: >> Clean up test/jdk/java/lang/invoke/VarHandles tests and remove tests for primitive value classes. Since `.class` is adequate to refer to the type, the template files are now updated to match the mainline with only small change for valhalla. > > test/jdk/java/lang/invoke/VarHandles/X-VarHandleTestAccess.java.template line 495: > >> 493: } >> 494: >> 495: > > Extra blank line? This extra blank line is in the mainline version. ------------- PR Review Comment: https://git.openjdk.org/valhalla/pull/979#discussion_r1470461195 From vromero at openjdk.org Tue Jan 30 17:05:03 2024 From: vromero at openjdk.org (Vicente Romero) Date: Tue, 30 Jan 2024 17:05:03 GMT Subject: Integrated: 8324980: [lworld] removing dead code plus gratuitous differences with jdk/master In-Reply-To: References: Message-ID: On Tue, 30 Jan 2024 16:53:45 GMT, Vicente Romero wrote: > removing differences originated from old, dead code, making preparations to simplify a future merge with jdk/master This pull request has now been integrated. 
Changeset: 969c4b7c Author: Vicente Romero URL: https://git.openjdk.org/valhalla/commit/969c4b7c76d3b247b035f15afca1b17dcf126c4d Stats: 40 lines in 7 files changed: 1 ins; 24 del; 15 mod 8324980: [lworld] removing dead code plus gratuitous differences with jdk/master ------------- PR: https://git.openjdk.org/valhalla/pull/983 From vromero at openjdk.org Tue Jan 30 17:05:03 2024 From: vromero at openjdk.org (Vicente Romero) Date: Tue, 30 Jan 2024 17:05:03 GMT Subject: Integrated: 8324980: [lworld] removing dead code plus gratuitous differences with jdk/master Message-ID: removing differences originated from old, dead code, making preparations to simplify a future merge with jdk/master ------------- Commit messages: - 8324980: [lworld] removing dead code plus gratuitous differences with jdk/master Changes: https://git.openjdk.org/valhalla/pull/983/files Webrev: https://webrevs.openjdk.org/?repo=valhalla&pr=983&range=00 Issue: https://bugs.openjdk.org/browse/JDK-8324980 Stats: 40 lines in 7 files changed: 1 ins; 24 del; 15 mod Patch: https://git.openjdk.org/valhalla/pull/983.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/983/head:pull/983 PR: https://git.openjdk.org/valhalla/pull/983 From dsimms at openjdk.org Tue Jan 30 23:56:57 2024 From: dsimms at openjdk.org (David Simms) Date: Tue, 30 Jan 2024 23:56:57 GMT Subject: [lworld] RFR: Merge jdk Message-ID: <9FDj8KH84EMASTvOj7Q9xeeXeJ-y6rHOiAi5FELQjr0=.93aab126-e956-410d-99dc-c25a5d78806a@github.com> Merge jdk-22+21 ------------- Commit messages: - Use AccessFlags.SUPER / AccessFlags.IDENTITY depending on Valhalla enabled - Merge tag 'jdk-22+21' into lworld_merge_jdk_22_21 - 8317510: Change Windows debug symbol files naming to avoid losing info when an executable and a library share the same name - 8318613: ChoiceFormat patterns are not well tested - 8318186: ChoiceFormat inconsistency between applyPattern() and setChoices() - 8318487: Specification of the ListFormat.equals() method can be improved - 
8317360: Missing null checks in JfrCheckpointManager and JfrStringPool initialization routines - 8318735: RISC-V: Enable related hotspot tests run on riscv - 8318727: Enable parallelism in vmTestbase/vm/gc/concurrent tests - 8318607: Enable parallelism in vmTestbase/nsk/stress/jni tests - ... and 116 more: https://git.openjdk.org/valhalla/compare/16fa7709...78d07697 The webrevs contain the adjustments done while merging with regards to each parent branch: - lworld: https://webrevs.openjdk.org/?repo=valhalla&pr=984&range=00.0 - jdk: https://webrevs.openjdk.org/?repo=valhalla&pr=984&range=00.1 Changes: https://git.openjdk.org/valhalla/pull/984/files Stats: 13971 lines in 451 files changed: 8742 ins; 3026 del; 2203 mod Patch: https://git.openjdk.org/valhalla/pull/984.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/984/head:pull/984 PR: https://git.openjdk.org/valhalla/pull/984 From dsimms at openjdk.org Tue Jan 30 23:59:16 2024 From: dsimms at openjdk.org (David Simms) Date: Tue, 30 Jan 2024 23:59:16 GMT Subject: [lworld] Integrated: Merge jdk In-Reply-To: <9FDj8KH84EMASTvOj7Q9xeeXeJ-y6rHOiAi5FELQjr0=.93aab126-e956-410d-99dc-c25a5d78806a@github.com> References: <9FDj8KH84EMASTvOj7Q9xeeXeJ-y6rHOiAi5FELQjr0=.93aab126-e956-410d-99dc-c25a5d78806a@github.com> Message-ID: On Tue, 30 Jan 2024 23:49:32 GMT, David Simms wrote: > Merge jdk-22+21 This pull request has now been integrated. 
Changeset: 80604148 Author: David Simms URL: https://git.openjdk.org/valhalla/commit/80604148390307c04b2881e8e680cdaabcb196e1 Stats: 13971 lines in 451 files changed: 8742 ins; 3026 del; 2203 mod Merge jdk Merge jdk-22+21 ------------- PR: https://git.openjdk.org/valhalla/pull/984 From vromero at openjdk.org Wed Jan 31 03:28:41 2024 From: vromero at openjdk.org (Vicente Romero) Date: Wed, 31 Jan 2024 03:28:41 GMT Subject: RFR: Merge lworld Message-ID: <1VbPNtvoyV6lPzB5OsydIEeDN9_J-3uCez0ftY1pfr4=.45da81a7-8a82-42c0-9a94-9588b3be1ff8@github.com> Merge branch 'lworld' into jep_401_javac_merge_lworld ------------- Commit messages: - Merge branch 'lworld' into jep_401_javac_merge_lworld - Merge jdk - 8317510: Change Windows debug symbol files naming to avoid losing info when an executable and a library share the same name - 8318613: ChoiceFormat patterns are not well tested - 8318186: ChoiceFormat inconsistency between applyPattern() and setChoices() - 8318487: Specification of the ListFormat.equals() method can be improved - 8317360: Missing null checks in JfrCheckpointManager and JfrStringPool initialization routines - 8318735: RISC-V: Enable related hotspot tests run on riscv - 8318727: Enable parallelism in vmTestbase/vm/gc/concurrent tests - 8318607: Enable parallelism in vmTestbase/nsk/stress/jni tests - ... 
and 116 more: https://git.openjdk.org/valhalla/compare/969c4b7c...7a973fce The webrevs contain the adjustments done while merging with regards to each parent branch: - jep_401_javac: https://webrevs.openjdk.org/?repo=valhalla&pr=985&range=00.0 - lworld: https://webrevs.openjdk.org/?repo=valhalla&pr=985&range=00.1 Changes: https://git.openjdk.org/valhalla/pull/985/files Stats: 13971 lines in 451 files changed: 8742 ins; 3026 del; 2203 mod Patch: https://git.openjdk.org/valhalla/pull/985.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/985/head:pull/985 PR: https://git.openjdk.org/valhalla/pull/985 From vromero at openjdk.org Wed Jan 31 03:41:42 2024 From: vromero at openjdk.org (Vicente Romero) Date: Wed, 31 Jan 2024 03:41:42 GMT Subject: RFR: Merge lworld [v2] In-Reply-To: <1VbPNtvoyV6lPzB5OsydIEeDN9_J-3uCez0ftY1pfr4=.45da81a7-8a82-42c0-9a94-9588b3be1ff8@github.com> References: <1VbPNtvoyV6lPzB5OsydIEeDN9_J-3uCez0ftY1pfr4=.45da81a7-8a82-42c0-9a94-9588b3be1ff8@github.com> Message-ID: <0_CI_k8GzgW8kdC2lXg06ftg_Ntd_VmHF7UzzSmxiIk=.e89c3f84-7c08-4f5b-b1a9-4e9986b7314d@github.com> > Merge branch 'lworld' into jep_401_javac_merge_lworld Vicente Romero has updated the pull request with a new target base due to a merge or a rebase. The incremental webrev excludes the unrelated changes brought in by the merge/rebase. 
The pull request contains eight additional commits since the last revision: - Merge branch 'lworld' into jep_401_javac_merge_lworld - 8324980: [lworld] removing dead code plus gratuitous differences with jdk/master - 8324864: [lworld] remove non-cyclic membership checks - 8194743: Compiler implementation for Statements before super() - 8324787: [lworld] eliminate restrictions on abstract value classes and additional refactorings - 8324784: [lworld] remove constant propagation code and tests - 8324643: [lworld] remove ACC_VALUE flag and the 'identity' modifier - 8324052: [lworld] remove experimental code from lworld ------------- Changes: - all: https://git.openjdk.org/valhalla/pull/985/files - new: https://git.openjdk.org/valhalla/pull/985/files/7a973fce..7a973fce Webrevs: - full: https://webrevs.openjdk.org/?repo=valhalla&pr=985&range=01 - incr: https://webrevs.openjdk.org/?repo=valhalla&pr=985&range=00-01 Stats: 0 lines in 0 files changed: 0 ins; 0 del; 0 mod Patch: https://git.openjdk.org/valhalla/pull/985.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/985/head:pull/985 PR: https://git.openjdk.org/valhalla/pull/985 From vromero at openjdk.org Wed Jan 31 03:41:43 2024 From: vromero at openjdk.org (Vicente Romero) Date: Wed, 31 Jan 2024 03:41:43 GMT Subject: Integrated: Merge lworld In-Reply-To: <1VbPNtvoyV6lPzB5OsydIEeDN9_J-3uCez0ftY1pfr4=.45da81a7-8a82-42c0-9a94-9588b3be1ff8@github.com> References: <1VbPNtvoyV6lPzB5OsydIEeDN9_J-3uCez0ftY1pfr4=.45da81a7-8a82-42c0-9a94-9588b3be1ff8@github.com> Message-ID: <1TXnJiRAFBMXaxFYj1RPqkR0hXMKl4zALqsuzFEOLHs=.a220c4ed-1067-40dc-8cf6-23958768c576@github.com> On Wed, 31 Jan 2024 03:21:55 GMT, Vicente Romero wrote: > Merge branch 'lworld' into jep_401_javac_merge_lworld This pull request has now been integrated. 
Changeset: c400ce3c Author: Vicente Romero URL: https://git.openjdk.org/valhalla/commit/c400ce3ce6051414a50b11006994a8eb2c5b8000 Stats: 13971 lines in 451 files changed: 8742 ins; 3026 del; 2203 mod Merge lworld ------------- PR: https://git.openjdk.org/valhalla/pull/985 From dsimms at openjdk.org Wed Jan 31 09:23:59 2024 From: dsimms at openjdk.org (David Simms) Date: Wed, 31 Jan 2024 09:23:59 GMT Subject: [lworld] Integrated: Merge jdk Message-ID: Merge jdk-22+22 + jdk-22+23 ------------- Commit messages: - Merge jdk - 8319318: bufferedStream fixed case can be removed - 8319556: Harmonize interface formatting in the FFM API - 8319378: Spec for j.util.Timer::purge and j.util.Timer::cancel could be improved - 8315680: java/lang/ref/ReachabilityFenceTest.java should run with -Xbatch - 8305814: Update Xalan Java to 2.7.3 - 8319573: Change to Visual Studio 17.6.5 for building on Windows at Oracle - 8319338: tools/jpackage/share/RuntimeImageTest.java fails with -XX:+UseZGC - 8319436: Proxy.newProxyInstance throws NPE if loader is null and interface not visible from class loader - 8314891: Additional Zip64 extra header validation - ... 
and 146 more: https://git.openjdk.org/valhalla/compare/80604148...7a208678 The webrevs contain the adjustments done while merging with regards to each parent branch: - lworld: https://webrevs.openjdk.org/?repo=valhalla&pr=986&range=00.0 - jdk: https://webrevs.openjdk.org/?repo=valhalla&pr=986&range=00.1 Changes: https://git.openjdk.org/valhalla/pull/986/files Stats: 40174 lines in 1489 files changed: 21005 ins; 7536 del; 11633 mod Patch: https://git.openjdk.org/valhalla/pull/986.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/986/head:pull/986 PR: https://git.openjdk.org/valhalla/pull/986 From dsimms at openjdk.org Wed Jan 31 09:24:01 2024 From: dsimms at openjdk.org (David Simms) Date: Wed, 31 Jan 2024 09:24:01 GMT Subject: [lworld] Integrated: Merge jdk In-Reply-To: References: Message-ID: On Wed, 31 Jan 2024 09:10:18 GMT, David Simms wrote: > Merge jdk-22+22 + jdk-22+23 This pull request has now been integrated. Changeset: e5818b97 Author: David Simms URL: https://git.openjdk.org/valhalla/commit/e5818b976bece335be3427dd08ba1971db21a8e9 Stats: 40174 lines in 1489 files changed: 21005 ins; 7536 del; 11633 mod Merge jdk Merge jdk-22+22 + jdk-22+23 ------------- PR: https://git.openjdk.org/valhalla/pull/986 From vromero at openjdk.org Wed Jan 31 15:23:22 2024 From: vromero at openjdk.org (Vicente Romero) Date: Wed, 31 Jan 2024 15:23:22 GMT Subject: Integrated: Merge lworld Message-ID: <_lT5uKFTuv_PbL7u1d6K9QPsc6MsooqAzQNCMRFOYqY=.18673d75-ca5c-4dd3-ac64-89e1ac553def@github.com> Merge branch 'lworld' into lw5_merge_lworld # Conflicts: # src/jdk.compiler/share/classes/com/sun/tools/javac/code/Flags.java # src/jdk.compiler/share/classes/com/sun/tools/javac/parser/JavacParser.java ------------- Commit messages: - Merge branch 'lworld' into lw5_merge_lworld - Merge lworld - Merge lworld - Merge lworld - Merge lworld - Merge lworld - 8318117: [lw5] create a switch for null-restricted types - 8316628: [lw5] remove vnew, aconst_init, and withfield - Merge 
lworld - 8316561: [lw5] class file attribute NullRestricted shouldn't be generated for arrays - ... and 29 more: https://git.openjdk.org/valhalla/compare/80604148...f0b3a0d1 The merge commit only contains trivial merges, so no merge-specific webrevs have been generated. Changes: https://git.openjdk.org/valhalla/pull/987/files Stats: 8227 lines in 292 files changed: 4456 ins; 2613 del; 1158 mod Patch: https://git.openjdk.org/valhalla/pull/987.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/987/head:pull/987 PR: https://git.openjdk.org/valhalla/pull/987 From vromero at openjdk.org Wed Jan 31 15:23:24 2024 From: vromero at openjdk.org (Vicente Romero) Date: Wed, 31 Jan 2024 15:23:24 GMT Subject: Integrated: Merge lworld In-Reply-To: <_lT5uKFTuv_PbL7u1d6K9QPsc6MsooqAzQNCMRFOYqY=.18673d75-ca5c-4dd3-ac64-89e1ac553def@github.com> References: <_lT5uKFTuv_PbL7u1d6K9QPsc6MsooqAzQNCMRFOYqY=.18673d75-ca5c-4dd3-ac64-89e1ac553def@github.com> Message-ID: On Wed, 31 Jan 2024 15:13:03 GMT, Vicente Romero wrote: > Merge branch 'lworld' into lw5_merge_lworld > # Conflicts: > # src/jdk.compiler/share/classes/com/sun/tools/javac/code/Flags.java > # src/jdk.compiler/share/classes/com/sun/tools/javac/parser/JavacParser.java This pull request has now been integrated. 
Changeset: 1e7c8398 Author: Vicente Romero URL: https://git.openjdk.org/valhalla/commit/1e7c83989effa9922fadde7de80dd56465b65620 Stats: 48823 lines in 1507 files changed: 29436 ins; 10026 del; 9361 mod Merge lworld ------------- PR: https://git.openjdk.org/valhalla/pull/987 From mchung at openjdk.org Wed Jan 31 17:32:22 2024 From: mchung at openjdk.org (Mandy Chung) Date: Wed, 31 Jan 2024 17:32:22 GMT Subject: [lworld] Integrated: 8324792: [lworld] remove VarHandle test cases for primitive value class In-Reply-To: <1lggCZ26cnRWcil8E4ol6XLUIWCYS0Qhj9jOhIX1GDc=.a285f588-2b01-4389-9afa-6dc405ec07d2@github.com> References: <1lggCZ26cnRWcil8E4ol6XLUIWCYS0Qhj9jOhIX1GDc=.a285f588-2b01-4389-9afa-6dc405ec07d2@github.com> Message-ID: On Sat, 27 Jan 2024 00:23:35 GMT, Mandy Chung wrote: > Clean up test/jdk/java/lang/invoke/VarHandles tests and remove tests for primitive value classes. Since `.class` is adequate to refer to the type, the template files are now updated to match the mainline with only small change for valhalla. This pull request has now been integrated. 
Changeset: a65faa23 Author: Mandy Chung URL: https://git.openjdk.org/valhalla/commit/a65faa23dc6e74823c33878a1596b82e1b807059 Stats: 6767 lines in 40 files changed: 23 ins; 4416 del; 2328 mod 8324792: [lworld] remove VarHandle test cases for primitive value class Reviewed-by: rriggs ------------- PR: https://git.openjdk.org/valhalla/pull/979 From vromero at openjdk.org Wed Jan 31 18:30:20 2024 From: vromero at openjdk.org (Vicente Romero) Date: Wed, 31 Jan 2024 18:30:20 GMT Subject: Integrated: Merge lworld In-Reply-To: <3keRHq-XSHUt6aRvNFsyAOLGICYRgicUm07OlmrYKLs=.2ff13639-960f-4fa3-b09e-99288484164f@github.com> References: <3keRHq-XSHUt6aRvNFsyAOLGICYRgicUm07OlmrYKLs=.2ff13639-960f-4fa3-b09e-99288484164f@github.com> Message-ID: On Wed, 31 Jan 2024 18:22:31 GMT, Vicente Romero wrote: > Merge branch 'lworld' into jep_401_javac_merge_lworld > # Conflicts: > # src/jdk.compiler/share/classes/com/sun/tools/javac/code/Preview.java > # src/jdk.compiler/share/classes/com/sun/tools/javac/code/Source.java This pull request has now been integrated. 
Changeset: 99b62c02 Author: Vicente Romero URL: https://git.openjdk.org/valhalla/commit/99b62c027d48f6ecb7b0fdb01737e651b2394f3d Stats: 40148 lines in 1487 files changed: 20999 ins; 7523 del; 11626 mod Merge lworld ------------- PR: https://git.openjdk.org/valhalla/pull/988 From vromero at openjdk.org Wed Jan 31 18:30:18 2024 From: vromero at openjdk.org (Vicente Romero) Date: Wed, 31 Jan 2024 18:30:18 GMT Subject: Integrated: Merge lworld Message-ID: <3keRHq-XSHUt6aRvNFsyAOLGICYRgicUm07OlmrYKLs=.2ff13639-960f-4fa3-b09e-99288484164f@github.com> Merge branch 'lworld' into jep_401_javac_merge_lworld # Conflicts: # src/jdk.compiler/share/classes/com/sun/tools/javac/code/Preview.java # src/jdk.compiler/share/classes/com/sun/tools/javac/code/Source.java ------------- Commit messages: - Merge branch 'lworld' into jep_401_javac_merge_lworld - Merge lworld - 8324980: [lworld] removing dead code plus gratuitous differences with jdk/master - 8324864: [lworld] remove non-cyclic membership checks - 8194743: Compiler implementation for Statements before super() - 8324787: [lworld] eliminate restrictions on abstract value classes and additional refactorings - 8324784: [lworld] remove constant propagation code and tests - 8324643: [lworld] remove ACC_VALUE flag and the 'identity' modifier - 8324052: [lworld] remove experimental code from lworld The merge commit only contains trivial merges, so no merge-specific webrevs have been generated. 
Changes: https://git.openjdk.org/valhalla/pull/988/files Stats: 7473 lines in 257 files changed: 1915 ins; 4818 del; 740 mod Patch: https://git.openjdk.org/valhalla/pull/988.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/988/head:pull/988 PR: https://git.openjdk.org/valhalla/pull/988 From vromero at openjdk.org Wed Jan 31 18:52:30 2024 From: vromero at openjdk.org (Vicente Romero) Date: Wed, 31 Jan 2024 18:52:30 GMT Subject: Integrated: Merge lworld Message-ID: <0X-YYO5kW45CpNy0Of6OjXdcAdBQgfAwj5dEcggF-As=.86e48c40-20cd-447d-8bc8-f60c3b564317@github.com> Merge branch 'lworld' into lw5_merge_lworld ------------- Commit messages: - Merge branch 'lworld' into lw5_merge_lworld - Merge jdk - 8319318: bufferedStream fixed case can be removed - 8319556: Harmonize interface formatting in the FFM API - 8319378: Spec for j.util.Timer::purge and j.util.Timer::cancel could be improved - 8315680: java/lang/ref/ReachabilityFenceTest.java should run with -Xbatch - 8305814: Update Xalan Java to 2.7.3 - 8319573: Change to Visual Studio 17.6.5 for building on Windows at Oracle - 8319338: tools/jpackage/share/RuntimeImageTest.java fails with -XX:+UseZGC - 8319436: Proxy.newProxyInstance throws NPE if loader is null and interface not visible from class loader - ... 
and 147 more: https://git.openjdk.org/valhalla/compare/1e7c8398...79946f51 The webrevs contain the adjustments done while merging with regards to each parent branch: - lw5: https://webrevs.openjdk.org/?repo=valhalla&pr=989&range=00.0 - lworld: https://webrevs.openjdk.org/?repo=valhalla&pr=989&range=00.1 Changes: https://git.openjdk.org/valhalla/pull/989/files Stats: 40174 lines in 1489 files changed: 21005 ins; 7536 del; 11633 mod Patch: https://git.openjdk.org/valhalla/pull/989.diff Fetch: git fetch https://git.openjdk.org/valhalla.git pull/989/head:pull/989 PR: https://git.openjdk.org/valhalla/pull/989 From vromero at openjdk.org Wed Jan 31 18:52:31 2024 From: vromero at openjdk.org (Vicente Romero) Date: Wed, 31 Jan 2024 18:52:31 GMT Subject: Integrated: Merge lworld In-Reply-To: <0X-YYO5kW45CpNy0Of6OjXdcAdBQgfAwj5dEcggF-As=.86e48c40-20cd-447d-8bc8-f60c3b564317@github.com> References: <0X-YYO5kW45CpNy0Of6OjXdcAdBQgfAwj5dEcggF-As=.86e48c40-20cd-447d-8bc8-f60c3b564317@github.com> Message-ID: On Wed, 31 Jan 2024 18:45:26 GMT, Vicente Romero wrote: > Merge branch 'lworld' into lw5_merge_lworld This pull request has now been integrated. Changeset: d1879439 Author: Vicente Romero URL: https://git.openjdk.org/valhalla/commit/d187943908c3b053553a05d51e8aa5ddcf05c373 Stats: 40174 lines in 1489 files changed: 21005 ins; 7536 del; 11633 mod Merge lworld ------------- PR: https://git.openjdk.org/valhalla/pull/989
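[Editorial note appended to this digest: the VarHandle test cleanup discussed above (valhalla PR 979) concerns generated templates that exercise VarHandle access modes against fields of value and identity classes. As a minimal, hedged sketch of the access pattern those templates test — using a hypothetical plain `Holder` class on a standard JDK, not the actual Valhalla test fixtures:]

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// Hypothetical stand-in for the generated test fixture classes;
// the real templates generate one such holder per field type.
class Holder {
    int value;
}

public class VarHandleSketch {
    static int demo() throws Throwable {
        // Obtain a VarHandle for Holder.value; note the receiver type is
        // referred to simply via Holder.class, as the cleaned-up templates do.
        VarHandle vh = MethodHandles.lookup()
                .findVarHandle(Holder.class, "value", int.class);
        Holder h = new Holder();
        vh.set(h, 41);               // plain write access mode
        vh.compareAndSet(h, 41, 42); // atomic compare-and-set access mode
        return (int) vh.get(h);      // plain read access mode
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(demo());
    }
}
```

[The templates in the PR iterate this pattern over every access mode (get/set, volatile, acquire/release, CAS, getAndSet, and so on) for each field type, which is why removing the primitive-value-class cases deleted several thousand generated lines.]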