Null channels (was: User model stacking)
Remi Forax
forax at univ-mlv.fr
Tue May 3 22:52:21 UTC 2022
----- Original Message -----
> From: "Brian Goetz" <brian.goetz at oracle.com>
[...]
>
> What I'm trying to do here is decomplect flattening from nullity. Right
> now, we have an unfortunate interaction which both makes certain
> combinations impossible, and makes the user model harder to reason about.
>
> Identity-freedom unlocks flattening in the stack (calling convention.)
> The lesson of that exercise (which was somewhat surprising, but good) is
> that nullity is mostly a non-issue here -- we can treat the nullity
> information as just being an extra state component when scalarizing,
> with some straightforward fixups when we adapt between direct and
> indirect representations. This is great, because we're not asking users
> to choose between nullability and flattening; users pick the combination
> of { identity, nullability } they want, and they get the best flattening
> we can give:
>
> case (identity, _) -> 1; // no flattening
> case (non-identity, non-nullable) -> nFields; // scalarize fields
> case (non-identity, nullable) -> nFields + 1; // scalarize fields with extra null channel
>
> Asking for nullability on top of non-identity means only that there is a
> little more "footprint" in the calling convention, but not a qualitative
> difference. That's good.
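To make the "extra null channel" concrete, here is a hypothetical Java sketch (ordinary Java, not actual VM-generated code) of what scalarizing a nullable value in the calling convention amounts to: the fields travel in registers, and nullity rides along as one extra boolean component. The `Range` class and both method names are invented for illustration.

```java
// Hypothetical illustration of scalarization with a null channel.
// A nullable value passed by reference...
record Range(int lo, int hi) {}

class NullChannelDemo {
    // Indirect (reference) form: one pointer, null is a legal argument.
    static int lengthIndirect(Range r) {
        return r == null ? 0 : r.hi() - r.lo();
    }

    // Direct (scalarized) form the JIT might use instead: the fields are
    // passed individually, plus one boolean carrying the nullity state.
    static int lengthDirect(boolean rNonNull, int rLo, int rHi) {
        return rNonNull ? rHi - rLo : 0;
    }

    public static void main(String[] args) {
        System.out.println(lengthIndirect(new Range(3, 10))); // 7
        System.out.println(lengthDirect(true, 3, 10));        // 7
        System.out.println(lengthDirect(false, 0, 0));        // 0, the "null" case
    }
}
```

The non-nullable case is the same minus the boolean, which is why nullability costs only a little extra footprint rather than a qualitative change.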
>
> In the heap, it is a different story. What unlocks flattening in the
> heap (in addition to identity-freedom) is some permission for
> _non-atomicity_ of loads and stores. For sufficiently simple classes
> (e.g., one int field) this is a non-issue, but because loads and stores
> of references must be atomic (at least, according to the current JMM),
> references to wide values (B2 and B3.ref) cannot be flattened as much as
> B3.val. There are various tricks we can do (e.g., stuffing two 32 bit
> fields into a 64 bit atomic) to increase the number of classes that can
> get good flattening, but it hits a wall much faster than "primitives".
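The "stuffing two 32 bit fields into a 64 bit atomic" trick can be sketched in plain Java (an illustrative analogue, not the VM's actual mechanism): pack a pair of ints into one long so that every load and store of the pair is a single atomic access, and no reader can ever observe a torn pair.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: two 32-bit fields packed into one 64-bit atomic, so the pair
// is always read and written atomically (no tearing).
class PackedPoint {
    private final AtomicLong bits = new AtomicLong();

    void set(int x, int y) {
        // Single atomic 64-bit store of both fields.
        bits.set(((long) x << 32) | (y & 0xFFFF_FFFFL));
    }

    int[] get() {
        long b = bits.get();  // single atomic 64-bit load
        return new int[] { (int) (b >>> 32), (int) b };
    }
}
```

The wall the message refers to is visible here: once the value's fields exceed the widest atomic the hardware offers (64 bits, sometimes 128), this packing is no longer possible and atomicity has to be bought some other way.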
>
> What I'd like is for the flattening story on the heap and the stack to
> be as similar as possible. Imagine, for a moment, that tearing was not
> an issue. Then where we would be in the heap is the same story as
> above: no flattening for identity classes, scalarization in the heap for
> non-nullable values, and scalarization with an extra boolean field
> (maybe, same set of potential optimizations as on the stack) for
> nullable values. This is very desirable, because it is so much easier
> to reason about:
>
> - non-identity unlocks scalarization on the stack
> - non-atomicity unlocks flattening in the heap
> - in both, ref-ness / nullity means maybe an extra byte of footprint
> compared to the baseline
>
> (with additional opportunistic optimizations that let us get more
> flattening / better footprint in various special cases, such as very
> small values.)
yes, choosing (non-)identity x (non-)nullability x (non-)atomicity at declaration site makes the performance model easier to understand.
At use site, there is still nullability x atomicity, via .ref and volatile respectively.
I agree with John that being able to declare array items volatile is missing, but I believe that is an Array 2.0 feature.
Once we get universal generics, the win is that not only ArrayList&lt;int&gt; but also ArrayList&lt;Integer&gt; is compact on the heap.
Rémi