Why is it not supported to create References for value objects?
David Alayachew
davidalayachew at gmail.com
Fri Dec 12 22:40:10 UTC 2025
I don't agree that #4 is the worst option -- in fact, I think it is the
best of the 4. Though my opinion might change with a prototype to play
with.
But regardless, I do agree that some sort of warning would be good.
Preferably a JFR event. Like Glavo said -- there are a lot of catch-alls
out there, so having the event **as well as** the exception sounds best to
me.
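To sketch what I mean by a JFR event (the event name and fields below are
purely hypothetical -- JEP 401 specifies nothing like this today):

    import jdk.jfr.Event;
    import jdk.jfr.Label;
    import jdk.jfr.Name;

    // Hypothetical event emitted when a Reference is constructed for a
    // value object. Shape for discussion only.
    @Name("jdk.ValueObjectReference")
    @Label("Reference Created for Value Object")
    class ValueObjectReferenceEvent extends Event {
        @Label("Value Class")
        String valueClass; // name of the value object's class
    }

Emitting it would follow the usual JFR pattern: create the event, fill in
valueClass, and commit() it right before (or instead of) throwing.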
On Fri, Dec 12, 2025, 4:17 PM Glavo <zjx001202 at gmail.com> wrote:
> Hi Brian,
>
> This answer was simpler than I expected.
>
> Of course, I understand the common usage of WR. I am well aware that the
> semantics I proposed, while allowing some code to keep working, do not
> align with the original intent of using WR in those contexts.
>
> However, I believe throwing an exception is the worst option—far worse
> than letting the WR turn into a strong reference.
> This is because users have long had the obligation to handle OOM; they can
> proactively trigger OOM for testing by reducing heap size or other means.
> In contrast, users have never had the obligation to handle
> IdentityException, nor any way to test it. As a result, an unexpected
> exception could lead the program into untested code paths.
> I even think throwing an exception is worse than directly letting the VM
> crash, because some catch blocks might silently swallow the
> IdentityException, making it much harder for users to understand why their
> program behaved abnormally.
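> As a concrete sketch of the failure mode I worry about (the cache variable
> here is made up for illustration):
>
>     // A typical "best effort" catch-all in caching code. IdentityException
>     // is unchecked, so this block swallows it silently.
>     try {
>         cache.put(key, new WeakReference<>(value)); // throws for a value object
>     } catch (RuntimeException e) {
>         // failure ignored; nothing is cached and nobody is told
>     }
>
> The program then quietly continues down a code path nobody has ever tested.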
>
> For me, a better approach would be to print a warning by default when
> creating a WR for a value object; at least this would make it easier for
> developers to notice the issue.
>
> Glavo
>
> On Sat, Dec 13, 2025 at 2:39 AM Brian Goetz <brian.goetz at oracle.com>
> wrote:
>
>> Indeed, many hours of discussion went into this decision.
>>
>> The basic problem is that all of the obvious answers are either
>> surprising or surprisingly expensive. We considered the following four
>> approaches:
>>
>> 1. Allow `new WR(value)`, which is cleared on birth.
>> 2. Allow `new WR(value)`, which is never cleared.
>> 3. Allow `new WR(value)`, which is cleared when all identities reachable
>> through the value become weakly reachable.
>> 4. Throw, and encourage implementations that are built on WR (such as
>> WHM) to offer their own ways of dealing with values.
>>
>> You can readily see how (1) would not be what anyone expects.
>>
>> You are arguing for (2). While this initially seems credible, its
>> primary appeal is "yeah it's useless, but it won't break things that just
>> throw everything in a WHM". But it is actually worse than useless! The
>> purpose of WRs is to not unnecessarily pin things in memory. But a WR
>> that is never cleared does exactly that; if the referent holds identities,
>> then it effectively becomes a strong reference.
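>> To make that concrete (Pair is illustrative, using the draft value-class
>> syntax from JEP 401):
>>
>>     // A value class whose fields refer to identity objects.
>>     value record Pair(Object left, Object right) {}
>>
>>     var ref = new WeakReference<>(new Pair(hugeIdentityObject, other));
>>     // Under option (2) this reference is never cleared, so
>>     // hugeIdentityObject cannot be collected while ref is reachable --
>>     // the "weak" reference pins it exactly as a strong one would.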
>>
>> (3) is a more principled answer, but is very expensive to implement, and
>> it's still not clear that this is what people will expect.
>>
>> (4) is honest, if inconvenient. Given that the majority of uses of WR
>> are through higher-level constructs like WHM, which have more flexibility
>> to choose the semantics that is right for their more restricted domain, it
>> made sense to make this a WHM (and friends) problem rather than a WR
>> problem (given that there were no good answers at the WR level).
>>
>> On 12/12/2025 10:02 AM, Glavo wrote:
>>
>> Hi,
>>
>> In the current draft of JEP 401, I saw the following statement:
>>
>> > The garbage collection APIs in java.lang.ref and java.util.WeakHashMap
>> do not allow developers to manually manage value objects in the heap.
>> > Attempts to create Reference objects for value objects throw
>> IdentityException at run time.
>>
>> We could clearly have pretended that all value objects exist forever and
>> are never collected, so that Reference or WeakHashMap would still work with
>> value objects.
>> Obviously, doing so would break far less existing code than having them
>> throw IdentityException directly.
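>> The kind of existing code I have in mind is the common weak-cache idiom
>> (Key and Metadata are placeholder names):
>>
>>     // Entries disappear once a key is no longer referenced elsewhere.
>>     Map<Key, Metadata> cache = new WeakHashMap<>();
>>     cache.put(key, metadata);
>>     // Under the current draft, this put would presumably throw
>>     // IdentityException if Key migrates to a value class, since
>>     // WeakHashMap weakly references its keys internally.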
>>
>> As an analogy, I think this is very similar to ThreadLocal on virtual
>> threads. Although virtual threads make many use cases of ThreadLocal
>> slow and memory-hungry, Project Loom did not outright forbid users from
>> using ThreadLocal on virtual threads, yet Project Valhalla chose to break
>> this (admittedly inefficient) coding pattern.
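>> For comparison, this remains legal on virtual threads today, even though
>> it scales poorly (SimpleDateFormat is just a familiar example):
>>
>>     static final ThreadLocal<SimpleDateFormat> FMT =
>>         ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));
>>
>>     Thread.ofVirtual().start(() ->
>>         System.out.println(FMT.get().format(new Date())));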
>>
>> I don’t understand why Project Valhalla made this choice. Has there been
>> any related discussion in the past? I’m really eager to know the reasoning
>> behind this decision.
>>
>> Glavo
>>
>>
>>