Nullable types and inference
Brian Goetz
brian.goetz at oracle.com
Mon Apr 29 19:51:22 UTC 2019
> The first case is a corner case, and for generics it works until someone tries to stuff null into a value type.
>
> So instead of introducing nullable value types in the language, which makes the language far more complex than it should be, I think we should come up with a far simpler proposal: a declaration-site tagging of the methods that don't work with value types.
>
>
> // proposed syntax
> interface Map<K, V> {
> "this method doesn't work if V is a value type" public V get(Object o);
> }
We explored this idea in M3; we jokingly called this “#ifref”, which is to say, we would restrict members to the reference specialization. This was our first attempt at dealing with this problem. We gave up on this for a number of reasons, not least of which was that it really started to fall apart when you had more than one type variable. But it was a hack, and only filtered out the otherwise-unavoidable NPEs.
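To see why multiple type variables hurt, consider a sketch (hypothetical marking, not the actual M3 syntax, and removeFirstKey is a made-up member, purely for illustration):

    // Hypothetical "#ifref"-style marking, not the actual M3 syntax:
    interface Map<K, V> {
        V get(Object key);     // returns null for absent keys; unsafe only when V is a value type
        K removeFirstKey();    // made-up member; unsafe only when K is a value type
    }

With two tvars there are four specializations (<ref,ref>, <ref,val>, <val,ref>, <val,val>), and each member may be safe in a different subset of them; a single "reference specialization only" switch doesn't carve that space.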
More generally, there’s lots of generic code out here that assumes that null is a member of the value set of any type variable T, that you can stuff nulls in arrays of T, etc. Allowing users to instantiate arbitrary generics with values and hope for no NPEs (or expect authors of all those libraries to audit and annotate their libraries) is not going to leave developers with a feeling of safety and stability.
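For example, here is a made-up but representative erased class that freely treats null as a member of T's value set:

    // Made-up, but representative of existing erased generic code:
    class SimpleCache<T> {
        @SuppressWarnings("unchecked")
        private T[] slots = (T[]) new Object[16];

        void evict(int i) {
            slots[i] = null;     // assumes null is in T's value set
        }

        T getOrNull(int i) {
            return slots[i];     // null doubles as the "absent" sentinel
        }
    }

Instantiate SimpleCache<Point> with a null-hostile value type Point, and each of those lines becomes a latent NPE.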
Further, allowing users to instantiate an ArrayList<Point> — even when the author of ArrayList promises up and down (including on behalf of all their subtypes!) that it won’t stuff a null into T — will cause code to silently change its behavior (and maybe its descriptor) when ArrayList is later specialized. This puts pressure on our migration story; we want the migration of ArrayList to be compatible, and that means that things don’t subtly break when you recompile them. Using ArrayList<Point?> today means that even when ArrayList is specialized, this source utterance won’t change its semantics.
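Concretely, with the V? notation under discussion (details not settled):

    ArrayList<Point?> boxes = new ArrayList<>();   // nullable instantiation:
    boxes.add(null);                               // legal today, and still legal
                                                   // after ArrayList is specialized

    ArrayList<Point> flat = new ArrayList<>();     // under this proposal, demands a
    flat.add(null);                                // null-free instantiation, so this
                                                   // line could be rejected up front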
Essentially, erased generic type variables have an implicit bound of “T extends Nullable”; migrating from erased to specialized is what allows a declaration to drop that implicit bound, and have the compiler type-check the validity of doing so.
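Spelling out the implicit bound (hypothetical notation, echoing the phrase above):

    // What today's erased declaration implicitly means:
    //     interface Map<K extends Nullable, V extends Nullable> { ... }
    //
    // What the migrated, specialized declaration gets to say, with the
    // compiler checking that dropping the bound is sound:
    //     interface Map<K, V> { ... }    // K and V may now be null-free value types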
We have four choices here:
- Don’t allow erased generics to be instantiated with values at all. This sucks so badly we won’t even discuss it.
- Require generics to certify their value-readiness, which means that their type parameters are non-nullable. This risks degenerating into the first option, and would be a significant impediment to the use and adoption of values.
- Let users instantiate erased generics with values, and let them blow up when the inevitable null comes along. That’s what you’re proposing.
- Bring nullity into the type system, so that we can accurately enforce the implicit constraint of today’s erased generics. That’s what I’m proposing. (The sketch after this list contrasts the last two options.)
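A sketch of how the last two options differ for a value type Point (exact behavior depends on the final design):

    ArrayList<Point> list = new ArrayList<>();
    list.add(null);
    // Option 3: compiles, and blows up with an NPE somewhere down the line,
    //           possibly only after ArrayList is later specialized.
    // Option 4: rejected at compile time, since Point does not satisfy the
    //           now-explicit nullability bound; the user writes
    //           ArrayList<Point?> to keep null in the picture.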
I sympathize with your concern that this is adding a lot of complexity. Ultimately, though, I don’t think just letting people blindly instantiate generics that can’t be proven to conform to their bounds helps users either. Better suggestions welcome!
(A related concern is that V? looks too much like ? extends V, especially in the face of multiple tvars: Map<V?, ? extends U>. This may have a syntactic solution.)