[External] : Re: User model stacking

Kevin Bourrillion kevinb at google.com
Thu Apr 28 13:51:07 UTC 2022


On Thu, Apr 28, 2022 at 9:09 AM Remi Forax <forax at univ-mlv.fr> wrote:

 So we’re suggesting restacking towards:
>
> - Value classes are those without identity
> - Value classes can be atomic or non-atomic, the default is atomic (safe
> by default)
> - Value classes can further opt into having a "val" projection (name TBD,
> val is probably not it)
> - Val projections are non-nullable, zero-default — this is the only
> difference
> - Both the ref and val projections inherit the atomicity constraints of
> the class, making atomicity mostly orthogonal to ref/val/zero/null
>
>
> Now that the model is clearer, let's try to discuss the val
> projection.
>

(For the record, I don't think the messages of the last 48 hours have made
the model "clearer", just floated a lot of possibilities.)

But I do want to say I appreciate you providing all these opposing
arguments to my proposal (which I asked for!).

I'm going to engage with your specific arguments, but I don't recall if you
ever engaged properly with all of mine. I feel like if you took them into
account also, your overall position might be more balanced? In particular,
it is a *huge* simplification to be able to say that every class does the
exact same thing, and some just do extra.


> Once we have universal generics, we will have an issue with zero-default
> value types: there are a lot of APIs in the JDK that explicitly specify
> that they return/pass null as a parameter,
> for example Map.get(); for those calls, we need a way to say that the type
> is not T but T | null.
> The current proposal is to use T.ref for that.
>

Yes, for comparison, in the JSpecify nullness project we've found we can't
avoid needing to support type projections in both directions for type
variables. In this context, for now we can just call those `T.val` and
`T.ref`.

I'll note, though, that there will always be some methods that were
designed in an older world that won't be a super fantastic experience to
use anymore; many `Map.get()` users will feel compelled to switch to
`Map.getOrDefault()`, and I think we'll have to be okay with some of that.
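
To make the `Map.get()` case concrete, here is a rough sketch of how such a
signature might be spelled under universal generics with the proposed ref
projection. This is just the syntax being discussed in this thread, not
anything that compiles today, and the interface is only an illustrative
fragment, not the real java.util.Map declaration:

    // Hypothetical sketch: if V can be instantiated with a zero-default
    // value type, a method that signals "no mapping" with null has to widen
    // its return type to the nullable (ref) projection of V.
    interface Map<K, V> {
        V.ref get(Object key);                      // may return null
        V getOrDefault(Object key, V defaultValue); // never needs null
    }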


> Now, Kevin and Brian think that for zero-default value types, in the
> language, Complex.val should be used instead of Complex.
> Let's see how it goes.
> 1/ There is a difference between Foo and Foo.ref for generics: Foo is a
> class while Foo.ref is a type.
>     The idea of using Complex.val means that the relationship is reversed:
>     Complex is the type and Complex.val is the class.
>

Not how I would put it, no.

In the world of classes, there is only `Complex`.

In the world of types, there is the type you're used to getting, `Complex`.
And there is a second type `Complex.val`.

The main trouble is that Java developers are not 100% comfortable with, or
accustomed to, thinking about the difference between classes and types. I
think they get it more than they *think* they do, but they wouldn't be able
to explain it.

java.lang.Class will confuse some people. There will be both a
`Complex.class` and a `Complex.val.class`. I'm currently thinking it should
work similarly to the difference between `Complex.class` and
`Complex[].class`: one actually represents *the class*, which gets loaded
and initialized; the other is a special type that gets composed out of the
first one. You can navigate between the two. We have no precedent for two
`Class` instances that represent the exact same class, but there are three
different precedents for there being "extra" `Class` instances beyond just
one-per-class: `String[].class`, `int.class` -- and even `void.class` which
has nothing to do with any class *or* any type.
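
For comparison, here is how the array precedent already behaves with today's
reflection API (this part is runnable on current Java); the thought is that
`Complex.class` and `Complex.val.class` could relate in an analogous,
navigable way:

    public class ArrayClassDemo {
        public static void main(String[] args) {
            // The array class is an "extra" Class instance composed out of
            // the element class; you can navigate between the two.
            Class<?> element = String.class;
            Class<?> array = String[].class;
            System.out.println(array.getComponentType() == element); // true
            System.out.println(element.arrayType() == array);        // true (Java 12+)
        }
    }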


>     If we ride with that horse, it means that in universal generics, we
> should not use T but T.val, except when we want T.val | null, which can be
> spelled T.
>

I'm not following, but again I think I'm naively assuming a type variable
might need to be projected in either direction.


>  2/ Because Complex.val is a class and Complex is a type, we have a weird
> asymmetry:
>      users will declare a class Complex, but to create a Complex, they will
> have to use new Complex.val().
>      As a user, this is weird.
>

The class name in a CICE (class instance creation expression) isn't a type
usage, just a class name. It should always be just `new Complex()`. That
should produce a value of type `Complex.val` so that it can be trivially
assigned to either kind of variable.
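
A minimal use-site sketch of that, again in the proposed (not yet real)
syntax, with made-up constructor arguments purely for illustration:

    // Hypothetical: the creation expression names the class; the resulting
    // value can be assigned to either type.
    Complex.val cv = new Complex(1.0, 2.0); // non-nullable, zero-default type
    Complex cr = new Complex(1.0, 2.0);     // nullable reference type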


>  3/ This may change, but currently Foo.class exists while Foo.ref.class is
> not allowed; you have to use a method to get the projection,
>      something like Foo.class.getMeTheProjectionPlease().
>      With .val being the default, it means that Complex.val.class exists
> while Complex.class does not.
>      The same goes for getting the default value: Complex.class.getDefaultValue()
> will not compile, it has to be Complex.val.class.getDefaultValue().
>      Again, weird.
>

But *why* is it weird?



>  4/ It's a double opt-in: people have to opt in at the declaration site by
> asking for a zero-default value type, but that is not enough,
>      it only works if the val type is used at the use site. I don't know of
> any feature in Java that requires a double opt-in.
>

You have to opt into a class being subclassable, then you have to opt into
subclassing it.
There are tons of examples.
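
To spell out that existing precedent (ordinary Java, nothing new here):

    // Opt-in #1: the author declares the class non-final, i.e. subclassable.
    class Shape { }
    // Opt-in #2: a client separately chooses to subclass it.
    class Circle extends Shape { }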

I'm not sure that's a good framing anyway. The use-site doesn't really opt
in or out. The class just opts in to generating two types. Now there are
two types and clients use those types however they want.



>  5/ It's easy to forget a ".val". For it to work, people will have to pepper
> .val everywhere, and it will be easy to miss one occurrence.
>      Depending on where the ".val" is missed, performance will suffer.
>

People can come back and purchase that better performance at the price of
dealing with the safety hazards. IMHO, this is exactly as it should be.

-- 
Kevin Bourrillion | Java Librarian | Google, Inc. | kevinb at google.com

