Value Types in Object-Oriented Programming Languages

Simon Ochsenreither simon at ochsenreither.de
Fri Jul 17 20:14:49 UTC 2015


Hi Beate,

the thesis looks pretty interesting!

I think some of the definitions in the beginning are debatable, but I guess
regardless of how you define things, someone will complain one way or another.
:-)
(E.g. OOP and FP are incompatible, OOP requires mutable state, ...)

By the way, the return type is missing in the code example on p. 139.

Value types in your model have bigger restrictions, like referential
transparency, no reference types inside value types (which makes them
practically pointer-free), no passing or accessing references in value type
methods, ... which feels like a larger split introduced into the language, with
a mutable, reference-passing model on the one side and an immutable
value-passing model on the other side.
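To make that split concrete, here is a minimal sketch of what such a
pointer-free value type might look like (Complex is my own example, not
taken from the thesis): all fields are primitives or other value types, and
methods neither accept nor return reference types.

```java
// Sketch of a value type under the thesis's restrictions, simulated as a
// plain final class: no reference-typed fields, no reference-typed
// parameters or returns, so instances are effectively pointer-free.
final class Complex {
    final double re, im;

    Complex(double re, double im) { this.re = re; this.im = im; }

    // Operates purely on value-typed data and returns a new value.
    Complex plus(Complex other) {
        return new Complex(re + other.re, im + other.im);
    }

    public static void main(String[] args) {
        Complex sum = new Complex(1, 2).plus(new Complex(3, 4));
        System.out.println(sum.re + " " + sum.im); // prints "4.0 6.0"
    }
}
```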

The automatic validation with isValid is something everyone would love to have,
I think, but the biggest issue is the difficulty of providing sane behavior
when you create values out of thin air, e.g. with new Array[SomeValueType](100)
(substitute the equivalent Java syntax).
As far as I see, nullability is only mentioned in the "open questions" part, but
I feel that differences in handling null can lead to quite different value type
designs down the road.
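The "out of thin air" problem can be sketched like this (Percentage is my
own example, not from the thesis): even if construction enforces isValid,
bulk allocation produces default values that never went through a
constructor.

```java
// Sketch: a value type whose constructor enforces isValid, confronted with
// the all-zeros default that a flattened array allocation would produce.
final class Percentage {
    final int value;

    Percentage(int value) {
        if (!isValid(value)) throw new IllegalArgumentException("out of range");
        this.value = value;
    }

    static boolean isValid(int v) { return v >= 1 && v <= 100; }

    public static void main(String[] args) {
        // A flattened value-type array would start life as all-zero bit
        // patterns, simulated here with a primitive array:
        int[] raw = new int[100];
        // raw[0] is 0, yet isValid(0) is false -- the invariant the
        // constructor guarantees is broken before any user code runs.
        System.out.println(Percentage.isValid(raw[0])); // prints "false"
    }
}
```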

Reducing identity/equality to one operation looks like a great simplification,
if it can be made to work ... something which I have trouble believing.
I guess it really depends on how you define the requirements for equality in
your model, and whether you provide a predefined (and maybe not
user-overridable) implementation for such operations.
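One way such a predefined, non-overridable equality could look (my sketch,
not the thesis's definition) is plain component-wise structural comparison:

```java
// Sketch: if equality on value types is defined as component-wise
// comparison with no user override, identity and equality collapse into
// a single operation.
final class Point {
    final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return p.x == x && p.y == y;
    }

    @Override public int hashCode() { return 31 * x + y; }

    public static void main(String[] args) {
        // Two separately constructed points with equal components are
        // simply equal -- there is no separate notion of identity.
        System.out.println(new Point(1, 2).equals(new Point(1, 2))); // prints "true"
    }
}
```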

With such a large distinction, it's always a question how cumbersome bridging
this divide becomes in practice.

Achieving compatibility with different value types with inclusion functions
restricted to VT -> VT operations looks like a very constrained notion of
implicit conversions (which you mention in the paper). From experience, I'm
not sure that this subset is actually that useful. In Scala this subset of
implicits is the least useful one and is mainly kept for Java compatibility
("implicit widening conversions"). The issue is that these conversions don't
scale. Even if Foo -> Bar can be made to work, the question is always what
happens with Box[Foo] -> Box[Bar]. Either it doesn't work, which is confusing
and inconsistent, or it is supposed to work, and then the whole variance
complexity enters the debate.
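The scaling problem already shows up with Java's own widening conversions:
the element-level conversion exists, but it does not lift through a generic
container.

```java
import java.util.ArrayList;
import java.util.List;

// int -> long converts implicitly, but List<Integer> -> List<Long> does
// not: Java generics are invariant, so the conversion doesn't "scale"
// from Foo -> Bar to Box[Foo] -> Box[Bar].
class WideningDemo {
    public static void main(String[] args) {
        int i = 42;
        long l = i;                 // implicit widening int -> long: allowed

        List<Integer> ints = new ArrayList<>();
        ints.add(i);
        // List<Long> longs = ints; // does NOT compile: no lifted conversion

        System.out.println(l);      // prints "42"
    }
}
```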

Overall, I think there are more commonalities than differences between the VTs
described in your thesis and what's being implemented, which I guess is a good
thing!

Nice thesis!

Bye,

Simon
