Equality for values -- new analysis, same conclusion
John Rose
john.r.rose at oracle.com
Sat Aug 10 18:57:59 UTC 2019
Good analysis. I’m going to comment in part because I want us to be especially clear about definitions. I also have a few additions, naturally. At the end I try to distill our options down to 3 or 4 cases, A/B/C/D.
On Aug 9, 2019, at 8:46 AM, Brian Goetz <brian.goetz at oracle.com> wrote:
>
> Time to take another look at equality, now that we’ve simplified away the LFoo/QFoo distinction. This mail focuses on the language notion of equality (==) only.
Something implicit in this analysis is that there are really only two possible syntactic occurrences of operator== as it relates to an inline type: the statically typed and the dynamically typed. We are calling them V== and Object==; sometimes we say val==, ref==, etc. Depending on context, V is either a specific inline type, or else a stand-in for all inline types in a statement that applies to them all. I suppose val== is clearer than V== in the latter case, so I'll say val== when I'm talking about all inline types.
The term Object== is unambiguous: it denotes the occurrence of operator== when at least one operand is of a non-inline, non-primitive type, such as an interface.
When we talk about generics we might say T==, where T stands for some type parameter, but I think that’s a trap, because we immediately have to say whether we are talking about erased vs. specialized generics, and also it’s conceivable that the bound type of T might affect the interpretation of the operator== sign.
So, there are two kinds of dynamically typed code: (1) code which works on values under a top type, either Object or an Interface (or some other abstract type, if we go there), and (2) code which works on values under some type variable T *where T is erased not specialized*. There are two kinds of statically typed code: (3) code which works on values under their own type V, and (4) code which uses a type variable T bound to V *using specialization*. Today we work with all types but (4), which comes in with specialized generics.
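As a rough sketch of those four shapes in today's syntax (purely illustrative; V stands for a hypothetical inline class, so case (3) is shown commented out and case (4) has no source form yet):

    // (1) dynamic code under a top type: this operator== is Object==
    static boolean dynamicEq(Object a, Object b) { return a == b; }

    // (2) dynamic code under an erased type variable: also Object== after erasure
    static <T> boolean erasedEq(T a, T b) { return a == b; }

    // (3) static code under the inline type itself: this would be val==
    // static boolean inlineEq(V a, V b) { return a == b; }

    // (4) specialized generic code, with T bound to V under specialization,
    //     would behave like (3); it is not expressible in today's source.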
It’s a good simplifying assumption that operator== means one thing in both dynamic cases, and the other thing in both static cases. We could try to weasel in a difference between (1) and (2), but that spoils a refactoring which replaces T by a subtype of its bound. Same point for (3) and (4). I hope it’s not necessary to go there, even though we are in a tight spot.
If we buy the above classifications, then we are really talking about static== and dynamic==, under the guise of val== and Object==. This is useful to remember when we go back and think about primitive prim== (int==, float==, etc.) which are similar to val== but, naturally, have to differ in various details due to 25-year-old decisions.
(It’s not premature for me to mention primitives at this point, since they are one of the two design centers in the normative slogan, “codes like a class, works like an int”. More in a bit.)
> Let’s start with the simplest case, the ==(V,V) operator. There is a range of possible interpretations:
>
> - Not allowed; the compiler treats == as not applicable to operands of type V. (Note that since V <: Object, == may still be called upon to have an opinion about two Vs whose static types are Object or interface.)
> - Allowed, but always false. (This appeals to a concept of “aggressive reboxing”, where a value is reboxed between every pair of byte codes.)
> - Weak substitutability. This is where we make a “good faith” attempt to treat equal points as equal, but there are cases (such as those where a value hides behind an Object/interface) where two otherwise equal objects might report not equal. This would have to appeal to some notion of invisible boxing, where sometimes two boxes for the same value are not equal.
> - Substitutability. This is where we extend == field wise over the fields of the object, potentially recursively.
Let me venture to give some very specific names to these operations, at the risk of derailing the discussion into bikeshed painting.
"ID=="
Objects.identityEquals(o1, o2) - Returns true if o1 and o2 both have the same object identity, or if o1 and o2 are both null. Returns false for non-identity objects, even if o1 SAME== o2.
Then we also have Objects.hasIdentity(o) -> o != null && identityEquals(o, o); (Cf. Double.isNaN for the x==x move.)
"SAME=="
Objects.substitutabilityEquals(o1, o2) - Returns true if and only if o1 and o2 are substitutable for each other: either Objects.identityEquals holds, or else both are instances of the same inline type V with recursively "SAME==" fields. (In the recursion step, "SAME==" applies to primitives in the same way that Object.equals applies to their wrappers. Since arrays have object identity, there's no recursion into arrays.)
Since ID== and SAME== differ only for inline types, they mean the same thing in today’s Java, but differ in Valhalla.
"FAST=="
Unsafe.fastEqualsCheck(o1, o2) - Must return false if SAME== would return false. May return false even if SAME== would return true. Attempts to return true when it can do so efficiently, on a best efforts basis. JITs may transform to more or less sensitive tests, or even to constant false, based on optimization context. The typical implementation is raw pointer comparison, which is very fast, but sometimes fails to detect identical inline values buffered in different places.
There’s no need for FAST== today, since ID== is very fast. If ID== gets slower maybe we want FAST== as a backup.
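To pin down the SAME== recursion just defined, here is a minimal, purely illustrative sketch in reflection terms. It is not a proposed implementation: isInlineClass is an assumed predicate (no such classes exist pre-Valhalla, so the stub answers false), and today's == stands in for identityEquals.

    import java.lang.reflect.Field;
    import java.lang.reflect.Modifier;

    class SubstitutabilitySketch {
        // Assumed predicate for "is an identity-free inline class"; stubbed out.
        static boolean isInlineClass(Class<?> c) { return false; }

        static boolean substitutabilityEquals(Object o1, Object o2) {
            if (o1 == o2) return true;                      // same identity, or both null
            if (o1 == null || o2 == null) return false;
            Class<?> c = o1.getClass();
            if (c != o2.getClass() || !isInlineClass(c))    // distinct identity objects,
                return false;                               // or not the same inline type V
            try {
                for (Field f : c.getDeclaredFields()) {
                    if (Modifier.isStatic(f.getModifiers())) continue;
                    f.setAccessible(true);                  // sketch only
                    Object v1 = f.get(o1), v2 = f.get(o2);  // primitives come back boxed, so
                    if (f.getType().isPrimitive()) {        // wrapper equals() gives exactly
                        if (!v1.equals(v2)) return false;   // the "like Object.equals" rule
                    } else if (!substitutabilityEquals(v1, v2)) {
                        return false;                       // recurse on reference fields
                    }
                }
            } catch (IllegalAccessException e) {
                throw new AssertionError(e);
            }
            return true;
        }
    }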
These “==“ operations are not syntactic but semantic, hence the different (upper-case) notation. The syntax operators val==, Object== have to be associated with one of those semantic operations.
> As noted above, we don’t only have to define ==V, but ==Object when there may be a value hiding behind the object. It might be acceptable, though clearly weird, for two values that are ==V to not be ==Object when viewed as Objects. However, the only way this might make sense to users is if this were appealing to a boxing conversion, and it's hard to say there’s a boxing conversion from V to Object when V <: Object. There is a gravitational force that says that if two values are V==, then they should still be == when viewed as Object or Comparable.
>
> Let’s take a look at the use cases for Object==.
>
> - Direct identity comparison. This is used for objects that are known to be interned (such as interned strings or Enum constants), as well as algorithms that want to compare objects by identity, such as IdentityHashMap. (When the operands are of generic (T) or dynamic (Object) type, the "Interned" case is less credible, but the other cases are still credible.)
Under the assumption that all operands are identity-objects, ID== and SAME== produce the same results and can be replaced secretly by FAST== (if the JIT can speculate or prove that assumption).
If the JIT can’t tell what’s going on, then both ID== and SAME== work the same, since the slow path of SAME== never gets executed. There’s a small runtime cost for detecting non-ID objects (inlines).
> - As a fast path for deeper equality comparisons (a == b || a.equals(b)), since the contract of equals() requires that == objects are equals().
This is what I call L.I.F.E., the Legacy Idiom For Equality. ID== is good here too. FAST== would be fine here, and a JIT could perform that strength reduction if it notices that the == expression is post-dominated by Object.equals (on the false path). I think that’s usually detectable.
If we define Object== as SAME==, then the optimization challenge becomes urgent, since the user has written the moral equivalent of "v1.equals(v2) || v1.equals(v2)", which appears to duplicate effort. Oops. But the same optimization as for ID== applies: strength-reduce the first part of L.I.F.E. to FAST== (and maybe then to constant false, FALSE==). In that case, the equals call can happen even if the two inputs are SAME==, so the JIT would have to ensure that this is a valid transform: The V.equals method must not have side effects and must only consult fields of V (and recursive uses of SAME== or other safe equals methods) to make its decisions. This requires putting constraints on certain equals methods, which we could do either by fiat or by analysis. Either way, we should do it.
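As a concrete illustration of the kind of equals the constraint would admit, here is a sketch using an ordinary final class as a stand-in for an inline V (the class and names are hypothetical):

    final class Pixel {                        // stand-in for an inline type V
        final int x, y;
        Pixel(int x, int y) { this.x = x; this.y = y; }

        @Override public boolean equals(Object o) {
            return o instanceof Pixel          // no side effects, no external state:
                && ((Pixel) o).x == x          // the answer depends only on V's fields,
                && ((Pixel) o).y == y;         // so this equals subsumes the SAME== test
        }
        @Override public int hashCode() { return 31 * x + y; }

        // L.I.F.E. at a use site; the leading == can be strength-reduced to FAST==
        // (or dropped) precisely because equals() above already covers SAME==.
        static boolean same(Pixel a, Pixel b) { return a == b || a.equals(b); }
    }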
> - In comparing references against null.
This is easy to optimize to FAST==, since the JIT almost certainly can see the null.
(So FAST== wins a lot in these cases, but it can do so secretly, without the user mentioning Unsafe methods, or burning the FAST== semantics into a public spec.)
> - In comparing references against a known sentinel value, such as a value already observed, or a sentinel value provided by the user.
This can be optimized if something is statically known about the sentinel. If it is ID-laden, then FAST== is sufficient. If the sentinel is a known inline type, then it’s an instanceof check plus a monomorphic SAME== (for a particular V). If the sentinel is of unknown type, then polymorphic techniques can reduce the cost, but it’s potentially an expensive SAME==, if that’s the semantics we demand.
So this scenario splits into a statically-known sentinel and a statically-unknown one. (Maybe Brian only meant the statically-known case. But the other shows up too. More in a second.)
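A sketch of the two shapes (names are just illustrative):

    import java.util.Map;

    class SentinelSketch {
        // Statically-known, identity-laden sentinel: FAST== / ID== is all we need.
        static final Object NOT_FOUND = new Object();

        static Object lookup(Map<String, Object> m, String k) {
            Object v = m.getOrDefault(k, NOT_FOUND);
            return (v == NOT_FOUND) ? null : v;      // cheap identity check
        }

        // Statically-unknown sentinel: either operand could turn out to be an
        // inline value, so this == may need the full SAME== semantics.
        static boolean seenBefore(Object candidate, Object previouslySeen) {
            return candidate == previouslySeen;
        }
    }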
> When generics are specialized, T== will specialize too, so when T specializes to a value, we will get V==, and when T is erased, we will get Object==.
For an erased generic, the cases above come into play, since it’s really just another instance of dynamic code.
OK, so Object== has three candidates: FAST==, ID==, SAME==.
SAME== cleanly extends the notion of “is the same object” to non-identity objects, using the concept of substitutability. I think we’d prefer that outcome.
(Note: The term “same” is the relevant term in the JLS and JVMS today, applying a substitutability test to both primitives and identity-laden objects. “Substitutable” means the same thing as “same” but with more syllables. Sometimes you want more syllables, sometimes fewer.)
FAST== would inject a lot of uncertainty into Java semantics, so I say let’s leave it to JITs as a secret weapon.
ID== is plausible: It is optimizable in the same places SAME== can be optimized, and its slow path is still fast. But it creates strange behaviors, such as “x != x”.
SAME== is often optimizable to FAST==. In a few cases we’d reach the slow path and see a performance drop, in code that works like this:
boolean goofyNonLifeContains(List<Object> xs, Object x) { for (var x1 : xs) if (x1 == x) return true; return false; }
(Note: This is non-LIFE code. ArrayList.contains can optimize SAME== and ID== to FAST==, by leaning on the semantics of the Object.equals call in the LIFE.)
Here we don’t know what x is, necessarily, and so if we say that Object== must translate to SAME==, we have an optimization challenge. The code will probably try to speculate that x is an ID-object (or null) and run a special loop that lowers SAME== to FAST==, with another copy of the loop that “knows” x is an inline object. Some library code may adapt to this new reality manually the way it does with null today (see ArrayList::indexOf for an example). Or not: Code with a proper LIFE can rely on the JIT to clean things up.
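For reference, the manual adaptation mentioned above follows the shape of today's null handling in ArrayList::indexOf (simplified here); a value-aware library could split its loop the same way on "is the probe an identity object?":

    static int indexOf(Object[] es, Object probe) {
        if (probe == null) {                        // special-case the operand kind that
            for (int i = 0; i < es.length; i++)     // == must treat differently
                if (es[i] == null) return i;
        } else {
            for (int i = 0; i < es.length; i++)
                if (probe.equals(es[i])) return i;  // general path leans on equals()
        }
        return -1;
    }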
I think that *only these cases* of variable sentinels or algorithms involving "raw" pointer compares will put demands on Object== that might discourage us from what we know we should do, which is define Object== as SAME==.
I think it's worth it. I hope I won't eat my words, with angry customers showing us their code falling into the slow path of SAME== on some miserably polymorphic code. Let's watch for this… The fallback position would be ID==.
(Note: We haven’t said anything yet about val==. We are about to.)
> Subtyping is a powerful constraint; it says that a value is-a Object. While it is theoretically possible to say that v1==v2 does not imply that ((Object)v1 == (Object) v2), I think we’ll have a very hard time suggesting this with a straight face. (If, instead, the conversion from value to Object were a straight boxing conversion, this would become credible.) Which says to me that if we define == on values at all, then it must be consistent with == on object or interface types.
It's not just theory, it's practice, so maybe this overstates the case a little. IDEs guide users away from the second expression (for many types, like String and Integer), offering to replace it with a call to v1.equals(v2). In the presence of primitives (the "works like an int" half of values), the above implication is routinely false for values outside of the valueOf cache (outside -128..127, with default VM settings). So there's no code base out there that does the "cast to Object and compare" trick for "works like an int" types, today.
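For the record, here is the wrapper behavior being leaned on (default VM settings; the valueOf cache makes only small values look interned):

    public static void main(String[] args) {
        int v1 = 1000, v2 = 1000;
        System.out.println(v1 == v2);                    // true: int== is a SAME== test
        System.out.println((Object) v1 == (Object) v2);  // typically false: two distinct boxes

        int s1 = 100, s2 = 100;
        // true with default settings: both casts box via the Integer.valueOf cache
        System.out.println((Object) s1 == (Object) s2);
    }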
For today's “codes like a class” entities, what you say is 100% true. And the non-boxing relation V <: Object is on the “like a class” side of the ledger. Still, I think we could do the above with a straight face if it was part of a user model that split the difference between int and class, making clear which aspects of V are “like an int” and which “like a class”. See more thoughts below.
> Similarly, the fact that we want to migrate erased generics to specialized, where T== will degenerate to V== on specialization, suggests that having Object== and V== be consistent is a strong normalizing force.
I think the root phenomenon here is that dynamic code and static code with similar syntax should have similar semantics. This link is made in the ways Brian points out: On the “codes like a class” side of the design, V <: Object entails that expressions valid under the higher type should be valid under the lower type also. And migration from erased to specialized also moves code from Object down to some specific V, where it should have the same meaning.
So Object== (the friend of dynamic code, at the top of the type hierarchy) should do the same as little val== (as viewed in static code).
On the other hand, V must also “work like an int”. And int== is not the same as Integer== (hence Brian’s note that boxing gives us some extra wiggle room in our design).
(I should also point out that int+ is not the same as Object+. We shouldn't expect migration to specialized code of expressions involving obj+obj to work without a hitch. So op== isn't unique in threatening automagic migration of generics.)
> Having == not be allowed on values at all would surely be strange, since == is (mostly) a substitutability test on primitives, and values are supposed to “work like an int.” And, even if we disallowed == on values, one could always cast the value to an Object, and compare them. While this is not an outright indefensible position, it is going to be an uncomfortable one.
The goal is surely to find the least uncomfortable position, and I expect discomfort on operator== no matter what. (I’ll explain why in a moment.)
I do think we could defend a *temporary* ban on val== until we gain more data, to confirm SAME== or some other option.
I also think, in any case, we want a named method like v.isSame(v1), to make it clear that a full SAME== test is intended. People could use that instead of == while we evaluated our options further.
OTOH, I think we are close, now, to being able to make a solid decision.
> Having V== always be false still does not seem like something we can offer with a straight face, again, citing “works like an int.”
Designer: “See, for consistency with Object==, we define val== as ID==.”
User: “You mean, you *know* it’s an identity-free type and you are looking for its *object identity*??”
Designer: “It seemed like a consistent thing to do. Let me get back to you…”
(Java has lots of precedent for statically rejecting stupid expressions, like ((String)x) instanceof Integer. Using ID== to compare non-id objects is deeply stupid, so should be rejected.)
> Having V== be “weak substitutability” is possible, but I don’t think it would make the VM people happy anyway. Most values won’t require recursive comparisons (since most fields of value types will be statically typed as primitives, refs, or values), but much of the cost is in having the split at all.
Yep. And the JIT can get most of the benefit of explicit FAST== secretly, by strength reduction in common cases.
> Note too that treating == as substitutability means use cases such as IdentityHashMap will just work as expected, with no modification for a value-full world.
That’s a nice point. The effect of SAME== (with a suitable System.identityHashCode) is to make every distinct inline value appear to be interned to unique object identity, as long as you don’t look too closely (e.g., by synchronizing).
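A small before/after contrast, using an ordinary class as a stand-in for a would-be inline Point (the claim about the inline case is, of course, the proposal above, not today's behavior):

    import java.util.IdentityHashMap;

    final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    class IdentityMapDemo {
        public static void main(String[] args) {
            IdentityHashMap<Point, String> m = new IdentityHashMap<>();
            m.put(new Point(1, 2), "first");
            m.put(new Point(1, 2), "second");
            // Prints 2 today (two distinct identities). If Point were inline and
            // == meant SAME== (with a matching identityHashCode), the second put
            // would replace the first and this would print 1.
            System.out.println(m.size());
        }
    }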
> So if V <: Object, it feels we are still being “boxed” into the corner that == is a substitutability test. But, in generic / dynamically typed code, we are likely to discourage broad use of Object==, since the most common case (fast path comparison) is no longer as fast as it once was.
Generally speaking, I agree we should write up some warnings about Object==. Happily, the JIT’s secret use of FAST== is a safety net for customers that ignore our warnings. And simple stuff like “x == null” or “x == y || x.equals(y)” won’t lose performance, given some L.I.F.E. support in the JIT.
As far as Object== goes in today’s usages, ID== would be an acceptable alternative to SAME==, since the two differ only for value types. But (as Brian says) if Object== is bound to ID==, and val== *must not* be bound to ID==, then we have a schism between static and dynamic views of every V. This schism would be painful. What would it gain? We would avoid running the slow path of SAME==, which is the cost that ID== doesn’t have. But I think I’ve shown that this slow path is not a risk for most uses of Object==.
So, ID== doesn’t buy much over SAME==, and therefore Object== and val== don’t need a schism. But if I’m wrong, and we need to fall back to ID==, then we will want to reconsider making a schism between dynamic and static views.
> We have a few other options to mitigate the performance concerns here:
>
> - Live with legacy ACMP anomalies;
I.e., don’t change ACMP, so it turns out to be FAST==, or change it to ID== but not SAME==.
This play makes ACMP into a third independent design point, besides Object== and val==, and makes it differ from both.
Currently, Object== compiles to ACMP, so this would suggest that ACMP should be upgraded to the new behavior of Object==, which is SAME== (or as a fallback ID==). But Object== could compile to a method call, if we want to decouple ACMP from Object==.
> - Re-explore a boxing relationship between V and Object.
This is a different take on “codes like a class, works like an int”. (More below.)
> If we say that == is substitutability, we still have the option to translate == to something other than ACMP. Which means that existing binaries (and likely, binaries recompiled with --source 8) will still use ACMP. If we give ACMP the “false if value” interpretation, then existing class files (which mostly use == as a fast-path check) will still work, as those tests should be backed up with .equals(), though they may suffer performance changes on recompilation. This is an uncomfortable compromise, but is worth considering. Down this route, ACMP has a much narrower portfolio, as we would not use it in translating most Object== unless we were sure we were dealing with identityful types.
I like the idea of limiting the use of ACMP to ID==, *if it pays off*.
Or, we could deprecate and remove ACMP, in favor of method intrinsics for ID== and SAME==.
We could keep ACMP as FAST==, period. But I think that’s too unpredictable; ID== looks safer to me even for legacy code. As noted above, the JIT will often strength-reduce ACMP to FAST==, if it can prove the move is valid. So there’s little upside to mandating FAST== in the JVMS.
So the good candidates for ACMP are SAME== and ID==.
I think maybe we want performance runs on a JVM which is configured with a switch to bind ACMP to either SAME== or ID==, and see what the downside is for SAME==. Surely it's very small today, since inline instances are not yet common. We can run artificial workloads with a number of inline types flowing through generic code. (In fact, we already have started such analyses, IIRC.)
> The alternate route to preserving a narrower definition of == is to say that _at the language level_, values are not subtypes of Object. Then, we can credibly say that the eclair companion type is the box, and there is a boxing conversion between V and I (putting the cream in the eclair is like putting it in a box.) This may seem like a huge step backwards, but it actually is a consistent world, and in this world, boxing is a super-lightweight operation. The main concern here is that when a user assigns a value to an object/interface type, and then invokes Object.getClass(), they will see the value class — which perhaps we can present as “the runtime box is so light that you can’t even see it.”
So, this is a bold move, which changes the balance between "codes like a class" and "works like an int".
In fact I am convinced that the tension between those two propositions is where all of our discomfort comes from (about == and many things). The question about val== vs. Object== stresses precisely those points where int and Object have incompatible protocols. Our thesis requires us to split the difference between Object== and int== in order to assign a meaning to val== that is compatible enough with both sides.
How do we split the difference? Do we align closely with "codes like a class", avoiding a schism between val== and Object==? Or do we align closely with "works like an int", and embrace boxing as a transform that allows such a schism? Is there a third way that makes obvious sense for inline types (even int, eventually)? I think we should shoot for a third way, if we can find one. That seems to offer the best security against importing the limitations of the past into the future.
> Where this world runs into more trouble is with specialized generics; we’d like to treat specialized Foo<T> as being generic in “T extends Object”, which subsumes values. This complicates things like bound computation and type inference, and also makes invoking Object methods trickier, since we have to do some sort of reasoning by parts (which we did in M3, but didn’t like it.)
Yikes, I don’t understand these particular issues fully, but they sound scary.
Part of the job of splitting the difference between int-nature and Object-nature involves deciding what to do with box/unbox conversions (which are part of int-nature). The “gravity” of the situation prompts us to click into a tight orbit around either “boxes and unboxes like an int” or “subtypes Object like a class”. Maybe there’s a halfway (Lagrange) point out there somewhere, such as “unboxes like an int and subtypes Object like a class”, where unboxing artifacts show up (as a special form of cast) but not boxing artifacts. Such a new point would compromise the user models of int and Object. We’ve looked hard for such a thing with not much luck on the problem of operator==, but I think it’s worth trying a little longer to come up with it. (Especially if it could make sense of float==, but I’m not holding my breath for that. See below for more thoughts about float==.)
Looking very closely at the current cycle of int-box-Integer-unbox-int, maybe we can limit the irregularities (the schism mentioned above) in some way that allows us to “blame” them on the unbox step. After all, the “codes like a class” part only implies one direction of type conversion, from some V to Object or V.Box (the eclair interface). Going the other way, from V.Box to V, is a pure “works like an int” move, since Java does not allow implicit down-casting, only unboxing conversions. And we can view V.Box-to-V as an unbox *as well as* (or maybe *instead of*) a down-cast.
Does this buy us anything? Consider the parallel case of Integer (motivated by “works like an int”):
  Integer == Integer  =>  ID==
  int     == Integer  =>  SAME==
  Integer == int      =>  SAME==
  int     == int      =>  SAME==
(Here, I’m relying on the fact that int== is an instance of SAME==. That’s true except for float== and double==.)
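And here is today's behavior behind that little table, for values chosen outside the cache (default VM settings):

    public static void main(String[] args) {
        Integer box1 = 1000, box2 = 1000;
        int prim = 1000;
        System.out.println(box1 == box2);  // Integer == Integer: ID==, typically false here
        System.out.println(box1 == prim);  // Integer == int: unbox, then int== (SAME==): true
        System.out.println(prim == box2);  // int == Integer: same unboxing rule: true
        System.out.println(prim == prim);  // int == int: SAME==: true
    }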
This suggests a new category of syntactic context for operator==, V.Box. So we have val==, val.box==, and Object==. For int, SAME== makes sense for all of them, although we might want to back off Object== to ID== (see above). Since val.box <: Object, we might expect that operator== must be the same for both, *but no*. The operator== for the box is referred to the prim== for the primitive, if at least one primitive is present. It looks like our job got more complicated. And the gravitational attraction between val== and Object== is more entangled.
What is the place (in the above setup) of a schism between val== and Object==? It would be that two ID-types (Integer, Object, V.Box) compared with operator== would bind to Object==, hence SAME== (or ID== as a backup). A mix of V-types and ID-types (or pure V-types) would bind to V==, hence SAME== (and *not* ID==).
The relation V <: V.Box preserves the SAME== semantics of V as long as any given v1==v2 has at least one "witness" of V (not V.Box). The relation V.Box <: Object preserves the SAME== semantics of V, unless we have to back down to ID== (for reasons discussed above).
The choice of val== vs. Object== is uncomfortable here, but it's a feature of today's language. Clearly we wish to improve it, but we are not mandated to fix it, under the slogan "codes like a class, works like an int", which simply requires us to make a new compromise between existing compromises.
So I’ll repeat that SAME== saves us a lot of trouble, since it is already compatible with prim== and Object== (except for float== and double==). If we can use SAME== (except for double== and float==) we win.
If we are forced to back down to ID== (as a binding for Object==), then where and how do we transition from ID== (as Object==) to SAME== (as val==)? I would say that a reasonable level of discomfort would be to emulate today’s rules about X==Y:
- If either X or Y is an inline type (or primitive) then use val== which is SAME== (or prim==, which is usually SAME==).
- If either X or Y is a super of V.Box (such as Object) then use Object== which is ID== (as a fallback).
- If both X and Y are V.Box then also use Object== (today's rule!)
So, today’s Integer, applied to operator==, does not always fall up to Object==. Sometimes it falls down to int==, if an int witness is present. If we replicate that rule for inlines, we obtain an uncomfortable mix between “codes like a class” (can jump up to Object) and “works like an int” (can unbox down to a primitive, which is not like an Object).
If we don’t want to replicate that model, we have to create a new one and sell *that* to users. So my point here, I guess, is that if we move towards a “box-centric” design for separating val== from Object==, as Brian suggests, there are still shades of gray between V and Object (namely V.Box).
> tl;dr: if we want a unified type system where values are objects, then I think we have to take the obvious semantics for ==, and if we want to reduce the runtime impact on _old_ binaries, we should consider giving older binaries older semantics, and taking the discontinuity as the cost of unification.
So far we have avoided the vexed issue of float== and double==, which is yet another kind of equality. *If* we were to somehow nullify the gravitational entanglement of val== with Object==, and define val==(x,y) as something different from Object==(x,y) (i.e., (Object)x==(Object)y), *then* we could safely define val== in a way that can extend to cover float== (which isn’t even an equality comparison, since it isn’t reflexive on NaN). *But* the costs of this are (1) not being able to define any val== until we have an operator overloading story, *and* (2) having a permanent schism between Object== and V== for many V types, not just float and double.
Personally, I think the cost for this is high. But it provides (to me) a slight hint that we could *delay* defining val==, forcing early access users to say what they mean, using a method call to compare their values in statically typed code, while we consider the matter further. This is a toe-dip into the waters of Brian's "Re-explore a boxing relationship between V and Object" but without going all the way in.
To summarize, I see three stable positions:
A. Make Object== and val== bind to SAME==, using JIT optimizations to the hilt. (At this point, bind ACMP to SAME== also.) Then only float== and double== are outliers to be dealt with later using ad hoc rules.
B. Make Object== be SAME== and val== be an overloaded operator, usually SAME== (except for float, double, and IEEE-like user types). Overloading can be introduced later; we start by hardwiring val== as SAME== but plan to unlock it to users.
C. Make Object== be ID== and val== be an overloaded operator as in B. This is motivated as a fall-back from B, if we get evidence that the slow path of SAME== is burdensome for dynamic code.
In *all cases* we need named entry points for SAME== and ID== (not FAST==) so users can say what they mean. I think SAME== should have bindings for both Object (dynamic typing) and every V (static typing).
We can choose A and later extend it to B.
In addition, there is a fourth unstable position which may be temporarily desirable:
D. Do not define val== as yet, but rather supply symbolic methods for ID== and SAME==. Let users say what they mean. Define Object== as ID== or even FAST== (undefined result) as a temporary measure. See what users say, and adjust the model accordingly to one of A or B above.
We can choose D and later extend it to any of A, B, or C.
This was very long; I hope it helps.
— John