Value Types in Object-Oriented Programming Languages

B. Ritterbach ritterbach at web.de
Sat Jul 18 09:56:17 UTC 2015


Hi Simon,

thanks for taking a look at my thesis so fast and in so much detail.
I'll try some answers, point by point.


 >I think some of the definitions in the beginning are debatable,
 > but I guess regardless of how you define things, someone will 
complain one way or another. :-)
Yes, that's right, it seems virtually impossible to find a common base 
that everybody would be happy with.
Even the use of (seemingly) basic concepts is too heterogeneous within 
the literature.
That's why I needed these definitions.  They are merely meant to make 
clear what I mean by these words _within the thesis_.


 > By the way, there is the return type missing in the code example on 
p. 139.
oops, you're right, I hadn't noticed until now.


 > Value types in your model have bigger restrictions,
 > like referential transparency,
that's true
 > no reference types inside value types (which makes them practically 
pointer-free),
not exactly.
Forbidding references within value types is a sufficient, but not a 
necessary, condition to ensure "value-likeness".
There are ways to allow object types within the implementation of value 
types and still find language rules that guarantee the characteristic 
behavior of values. They would make the language more complex, but some 
practical use cases may require less tight restrictions.

 > ... which feels like a larger split introduced into the language,
yes, that is the core idea: a two-fold type system, which strictly 
separates stateful abstractions (objects) from stateless abstractions 
(values), as well as stateful operations (methods) from stateless 
operations (functions).
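If it helps to picture the split, here is a minimal sketch in today's Java (the names Point and Turtle are made up, not from the thesis): a record approximates a stateless value, an ordinary class plays the stateful object.

```java
public class TwoFoldDemo {
    // A value: immutable, compared by content; a Java record comes close.
    record Point(double x, double y) {
        Point translated(double dx, double dy) {   // stateless "function"
            return new Point(x + dx, y + dy);
        }
    }

    // An object: mutable state, identity matters.
    static class Turtle {
        private Point position = new Point(0, 0);
        void move(double dx, double dy) {          // stateful "method"
            position = position.translated(dx, dy);
        }
        Point position() { return position; }
    }

    public static void main(String[] args) {
        Turtle t = new Turtle();
        t.move(3, 4);
        // two Points with equal components are the same value
        System.out.println(t.position().equals(new Point(3, 4)));  // prints "true"
    }
}
```

Of course Java does not enforce the split; the point is only to show the two kinds of abstractions side by side.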


 >The automatic validation with isValid is something everyone would love 
to do I think,
 > but the biggest issue is the difficulty of providing a sane behavior 
when you create values out of thin air,
 > e.g. with new Array[SomeValueType](100)
If you "create" a value - or rather, select a value from the value 
universe - you need to know which one.
In the case of an Array with 100 elements, this means you would have to 
specify each element.
However, you can always use a "mutable companion", a kind of builder 
object that builds the "value" (or rather, its object equivalent) step 
by step, and when finished, turns it into the "real" value.
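A rough sketch of such a mutable companion in plain Java (the names IntVector and Builder are illustrative, not from the thesis): the builder is freely mutable while the contents are assembled, and build() freezes them into an immutable value-like result.

```java
import java.util.Arrays;

public class CompanionDemo {
    // the immutable "value" side: contents fixed at construction
    static final class IntVector {
        private final int[] elements;
        private IntVector(int[] elements) { this.elements = elements; }

        int get(int i) { return elements[i]; }
        int size() { return elements.length; }

        @Override public boolean equals(Object o) {
            return o instanceof IntVector v && Arrays.equals(elements, v.elements);
        }
        @Override public int hashCode() { return Arrays.hashCode(elements); }

        // the mutable companion: stateful while building
        static final class Builder {
            private final int[] buffer;
            Builder(int size) { buffer = new int[size]; }
            Builder set(int i, int value) { buffer[i] = value; return this; }
            // defensive copy, so later use of the builder cannot affect the value
            IntVector build() { return new IntVector(buffer.clone()); }
        }
    }

    public static void main(String[] args) {
        IntVector.Builder b = new IntVector.Builder(100);
        for (int i = 0; i < 100; i++) b.set(i, i * i);
        IntVector v = b.build();
        System.out.println(v.get(10));  // prints "100"
    }
}
```

This is essentially the String/StringBuilder pattern: the 100-element case from the example above becomes a loop over the builder rather than one giant expression.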


 > As far as I see, nullability is only mentioned in the "open 
questions" part,
 > but I feel that differences in handling null can lead to quite 
different value type designs down the road.
yes, I agree, nullability is a big issue.
I wonder whether nullability must necessarily be handled by the type 
system (as many languages do, e.g. Maybe types in Haskell, etc.).
Probably nullability could be designed as a property of a variable 
(similar to mutability): some variables are declared as nullable, others 
as non-nullable.  I also wonder whether variables of object types 
should always be nullable and variables of value types always 
non-nullable (as is the case with object types vs. primitive types in 
Java). There are some use cases for nullable variables of a value type, 
and also for non-nullable variables of an object type.
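Today's Java can only approximate "nullability as a property of the variable", e.g. with Optional; a small sketch (Temperature is a made-up stand-in for a value type) of the distinction:

```java
import java.util.Optional;

public class NullabilityDemo {
    // illustrative stand-in for a value type
    record Temperature(double celsius) {}

    public static void main(String[] args) {
        // non-nullable by convention: this variable always holds a value
        Temperature current = new Temperature(21.5);

        // nullable variable of a value type: "no reading yet" is a legal
        // state, made explicit in the variable's type rather than via null
        Optional<Temperature> lastReading = Optional.empty();

        System.out.println(current.celsius());        // prints "21.5"
        System.out.println(lastReading.isPresent());  // prints "false"
    }
}
```

A language-level design could make the nullable/non-nullable choice a declaration-site property of every variable, instead of a wrapper type as here.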


 > Reducing identity/equality to one operation looks like a great 
simplification,
 > if it can be made to work ... something which I have trouble believing.
I tried to argue that it is possible (chapter 8 of the thesis) - based 
on the assumption that value types and object types are strictly 
separated, and currently it looks quite ok.  Objections welcome!


 > I guess it really depends on how you define the requirements for 
equality in your model
The requirements for value equality are stated by the (modified) 
equality contract (and it also covers object identity).
The (modified) equality contract is close to the Java/Scala equality 
contract, with slightly more restrictive conditions,
and it serves as a unique (and implementation-independent) specification 
for equality (and identity, respectively).
I'll circulate a paper about the modified contract in the next mail.

 > and whether you provide a predefined (and maybe not user-overridable 
implementation) for such operations.
Object identity can be predefined by the language (actually, today 
nearly every OO language does support object identity).
For value equality, a standard behavior (that works for many value 
types) can be predefined by the language.
If additionally the language provides a mechanism for "overriding" value 
equality for cases where the predefined value equality is not 
appropriate, this would enable sticking to a single comparison operation.
(However, for a language that already has 2 or more comparisons, like == 
and equals in Java, it would be cumbersome to switch to that kind of 
equality/identity model, for reasons of upward compatibility.)
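A sketch of the predefined-plus-overridable idea in current Java (the names are illustrative; Rational assumes a positive denominator to keep it short): records give a predefined component-wise value equality, which is overridden where the components alone do not determine the value.

```java
public class EqualityDemo {
    // the predefined component-wise equality is appropriate here
    record Point(int x, int y) {}

    // 1/2 and 2/4 denote the same value, so the default comparison is replaced
    record Rational(int num, int den) {
        @Override public boolean equals(Object o) {
            return o instanceof Rational r
                && (long) num * r.den == (long) r.num * den;
        }
        @Override public int hashCode() {
            int g = gcd(Math.abs(num), den);
            return g == 0 ? 0 : (num / g) * 31 + (den / g);  // hash the normalized form
        }
        private static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }
    }

    public static void main(String[] args) {
        System.out.println(new Point(1, 2).equals(new Point(1, 2)));       // prints "true"
        System.out.println(new Rational(1, 2).equals(new Rational(2, 4))); // prints "true"
    }
}
```

The language-design question is then whether such an override can be admitted without breaking the (modified) equality contract.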


 > Achieving compatibility with different value types with inclusion 
 > functions restricted to VT -> VT operations looks like a very 
 > constrained notion of implicit conversions (which you mention in the 
 > paper).
An inclusion function is meant as a means for establishing a subtype 
relation between two value types, and it should be applied only if value 
type B actually IS a subtype of value type A.  That does not necessarily 
mean that other (less restrictive) implicit conversions do not exist in 
the language - only that they are not suitable, and not meant, for 
establishing a subtype relation, and thus should be treated differently.
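A tiny sketch of what such an inclusion function could look like (names are illustrative, not from the thesis): a total mapping from one value type into another that preserves values and the meaning of operations, here witnessing that every int IS a rational.

```java
public class InclusionDemo {
    record Rational(int num, int den) {
        static Rational include(int n) {          // the inclusion function: int -> Rational
            return new Rational(n, 1);
        }
        Rational plus(Rational o) {
            return new Rational(num * o.den + o.num * den, den * o.den);
        }
        boolean sameValue(Rational o) {           // value equality up to normalization
            return (long) num * o.den == (long) o.num * den;
        }
    }

    public static void main(String[] args) {
        // the inclusion respects addition: include(2) + include(3) == include(5)
        Rational sum = Rational.include(2).plus(Rational.include(3));
        System.out.println(sum.sameValue(Rational.include(5)));  // prints "true"
    }
}
```

Unlike an arbitrary implicit conversion, an inclusion of this kind is total and loss-free, which is what justifies treating it as a subtype witness.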

 >Overall, I think there are more commonalities than differences between 
the VTs described in your thesis
 >and what's being implemented, which I guess is a good thing!
Yes, that's something I'm really happy about.
When we started working on value types (roughly 10 years ago) it looked 
like something outlandish; very few people were interested in the 
subject. Now value types (see project Valhalla), and more generally, 
adding slightly more "functional" elements to OO languages, seem very 
up to date.
I recently had a look at Swift. With classes versus structures, Swift 
has a two-fold type system, which comes close to the 
object-value separation we advocate.  I'll dig into Swift more closely 
as soon as I can. There are some details I really want to understand.


Bye,

Beate



On 17.07.2015 22:14, Simon Ochsenreither wrote:
> Hi Beate,
>
> the thesis looks pretty interesting!
>
> I think some of the definitions in the beginning are debatable, but I 
> guess regardless of how you define things, someone will complain one 
> way or another. :-)
> (E.g. OOP and FP are incompatible, OOP requires mutable state, ...)
>
> By the way, there is the return type missing in the code example on p. 
> 139.
>
> Value types in your model have bigger restrictions, like referential 
> transparency, no reference types inside value types (which makes them 
> practically pointer-free), no passing or accessing references in value 
> type methods, ... which feels like a larger split introduced into the 
> language, with a mutable, reference-passing model on the one side and 
> an immutable value-passing model on the other side.
>
> The automatic validation with isValid is something everyone would love 
> to do I think, but the biggest issue is the difficulty of providing a 
> sane behavior when you create values out of thin air, e.g. with new 
> Array[SomeValueType](100) // substitute with equivalent Java syntax
> As far as I see, nullability is only mentioned in the "open questions" 
> part, but I feel that differences in handling null can lead to quite 
> different value type designs down the road.
>
> Reducing identity/equality to one operation looks like a great 
> simplification, if it can be made to work ... something which I have 
> trouble believing.
> I guess it really depends on how you define the requirements for 
> equality in your model, and whether you provide a predefined (and 
> maybe not user-overridable implementation) for such operations.
>
> With such a large distinction, it's always a question how cumbersome 
> bridging this divide becomes in practice.
>
> Achieving compatibility with different value types with inclusion 
> functions restricted to VT -> VT operations looks like a very 
> constrained notion of implicit conversions (which you mention in the 
> paper). From experience, I'm not sure that this subset is actually 
> quite useful. In Scala this subset of implicits is the least useful 
> one and mainly kept for Java compatibility ("implicit widening 
> conversions"). The issue is that these conversions don't scale. Even 
> if Foo -> Bar can be made to work, it's always the question of what 
> happens with Box[Foo] -> Box[Bar]. Either it doesn't work, then it's 
> confusing and inconsistent, and if it was supposed to work, the whole 
> variance complexity enters the debate.
>
> Overall, I think there are more commonalities than differences between 
> VTs described in your thesis and what's being implemented, which I 
> guess is a good thing!
>
> Nice thesis!
>
> Bye,
>
> Simon



