Reference implementation

Reinier Zwitserloot reinier at zwitserloot.com
Thu Oct 29 07:08:32 PDT 2009


Actually, those 3 examples don't seem too bad for future expansion.  
Sure, reification is a hard nut to crack, but it always has been.

Specifically:

Java5 allowing unchecked generics casts isn't a big issue, as far as I
can see. A hypothetical future version of Java that does have
reification (call it javaX) can migrate via either of two options:

1. Use a different mechanic for casting-with-checking. Presumably
javaX has a new type concept (something like super type tokens but
less unwieldy), so instead of writing (List<String>)someList; you
might write List<String>.type.cast(someList); The nature of generics
casting is inherently complicated, so this syntax not looking like the
rest of Java is simply unavoidable: without generics there's only one
casting concept, but with generics there are at least two, so at least
one of them is not going to look like the status quo. Example:

  List<?> list = Collections.<Object>unmodifiableList(
      Arrays.<Object>asList(1, 2, 3, 4, 5));
  List<Integer> integerList = (List<Integer>) list;

With reification, a cast-with-reification-check would fail: the
runtime type of 'list' is List<Object>, which isn't compatible with
List<Integer>. As a practical matter, though, coercing 'list' into
the List<Integer> type couldn't possibly go wrong: it really does
contain only integers, and it's unmodifiable*. Thus the existing
concept of coercing the generics bounds remains a useful operation,
but it's a different operation from casting in a reified Java.
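
To make the distinction concrete, here's a minimal sketch, in today's
Java, of what cast-with-check could do under the hood. (The Casts
class and checkedCoerce method are hypothetical names for
illustration, not part of any JDK API; a reified javaX would do this
natively.)

  import java.util.List;

  final class Casts {
      // Emulates what a reified cast would verify automatically:
      // every element conforms to the requested element type.
      @SuppressWarnings("unchecked")
      static <T> List<T> checkedCoerce(List<?> list, Class<T> elementType) {
          for (Object e : list) {
              if (e != null && !elementType.isInstance(e)) {
                  throw new ClassCastException(
                          e.getClass().getName() + " is not a "
                          + elementType.getName());
              }
          }
          return (List<T>) list; // unchecked coercion, validated above
      }
  }

With that helper, Casts.checkedCoerce(list, Integer.class) on the list
above would succeed (every element really is an Integer), whereas a
reified cast comparing runtime types - List<Object> vs. List<Integer> -
would fail. That's exactly the two-concepts split described above.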

Thus, the argument that 'List<String>.type.cast(someList)' doesn't fit
existing Java syntax is valid, but there won't be a solution that
uniformly fits anyway, so it's a moot point.

2. Due to other incompatibilities between reified and unreified
generics, java-with-reification is, syntax-wise, never going to play
well with unreified generics. The solution is to put a 'style
reified;' or 'version X;' declaration at the top of your Java sources
to indicate that you want <bounds-go-here>, casts, etc. to have the
reified meaning. This solution will work for just about everything
(including full-complex vs. complex vs. simple resolution of the
diamond operator), of course, but the serious issues reification would
have caused back when Java5 was released mean there's a good reason to
go that far. The difficulty of sorting out the simple vs. complex vs.
full-complex issue right now isn't even in the same ballpark.
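
For illustration, here's a minimal sketch (plain erased Java, no
hypothetical syntax) of the semantic split such a marker would have to
gate: under erasure this coercion is silently accepted, while under
reified semantics the very same line would be a runtime-checked cast
and would fail:

  import java.util.ArrayList;
  import java.util.List;

  class ErasedVsReified {
      @SuppressWarnings("unchecked")
      public static void main(String[] args) {
          List<Object> objects = new ArrayList<Object>();
          objects.add("not an integer");
          // Legal and silent under erasure; reified javaX would reject it:
          List<Integer> ints = (List<Integer>) (List<?>) objects;
          // Under erasure, the failure surfaces only at the use site:
          // Integer i = ints.get(0); // would throw ClassCastException
      }
  }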


The wildcard issue is also about generics, and clearly you need
wildcards. Only if someone had raised this issue when generics were
designed, AND offered a viable alternative, would the wildcard issue
be analogous to this one.
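
(For context, this is the kind of library-method signature that
motivated wildcards in the first place; WildcardDemo is a made-up
name:)

  import java.util.List;

  class WildcardDemo {
      // Without the wildcard, this method could accept List<Number>
      // but not List<Integer> or List<Double>.
      static double sum(List<? extends Number> xs) {
          double total = 0;
          for (Number n : xs) {
              total += n.doubleValue();
          }
          return total;
      }
  }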


I'm not familiar enough with captured types, but my experience with
generics is that when inference works right, your code is obvious. I
have a _really_ hard time imagining how any smarter inference system
is going to break existing code, given how poor the inferencer is
today, and I take it that backwards compatibility doesn't extend to
keeping compiler error messages the same between releases of javac.
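
As one example of how poor today's inference is: in argument position
the expected type isn't used at all, so an explicit type witness is
routinely required. A minimal sketch (InferenceDemo and takeStrings
are made-up names):

  import java.util.Collections;
  import java.util.List;

  class InferenceDemo {
      static void takeStrings(List<String> ls) {}

      static void demo() {
          // takeStrings(Collections.emptyList());  // rejected today:
          //                                        // inferred List<Object>
          takeStrings(Collections.<String>emptyList()); // witness required
      }
  }

A smarter inferencer that used the expected argument type would simply
accept the commented-out call; it's hard to see what working code that
could break.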

I don't see how intersection types interfere with future Java
improvements. Also, when intersection types were designed, did anyone
raise this issue AND offer a viable alternative? Without both of those
conditions being true, it clearly does not parallel the current
situation.
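
(For context, one place where javac synthesizes an intersection type
today; IntersectionDemo is a made-up name:)

  class IntersectionDemo {
      static Object pick(boolean flag) {
          // The conditional's type is computed as the least upper
          // bound of Integer and String, which is roughly
          // Object & java.io.Serializable & Comparable<...>.
          return flag ? Integer.valueOf(1) : "one";
      }
  }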


If you want a good example of how a past java design decision gets in  
the way of improvements now, look no further than overloading. Half  
the new features I think up for Project Lombok only work with a rider  
that you can't overload your methods. (Examples: default parameters,  
named parameters, method reference literals, closure-by-method-ref).
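
For instance, here's a minimal sketch of how overloading clashes with
hypothetical default parameters (the 'int b = 10' syntax in the
comment is invented for illustration):

  class OverloadClash {
      void m(int a) {}

      // Hypothetical default parameter: void m(int a, int b = 10) {}
      void m(int a, int b) {}

      void caller() {
          // If the second overload had a default for b, this call
          // would be ambiguous between m(5) and m(5, 10):
          m(5);
      }
  }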

Having said all of this, if you assert that your current plan for
implementing this feature WILL be forward compatible with a better
inference style, with no unfortunate gotchas, then this entire
discussion seems somewhat moot :P

  --Reinier Zwitserloot

*) Presuming that the original list reference is tossed, so that no  
code can add non-integers to this list.


On 2009/10/29, at 09:48, Maurizio Cimadamore wrote:

> Neal Gafter wrote:
>> On Wed, Oct 28, 2009 at 1:41 PM, Jonathan Gibbons
>> <Jonathan.Gibbons at sun.com> wrote:
>>
>>
>>> In the short term, we think that the current ("simple") approach is
>>> the way to go. Long term, we think we will need something like the
>>> "full-complex" approach that Maurizio has described. That will
>>> address the need for argument inference. However, we do not have
>>> the resources to fully investigate that solution at this point.
>>>
>>>
>>
>> I don't believe that the full-complex approach is both implementable
>> and upward compatible with the simple approach.  However, it hasn't
>> been specified in any detail, so it isn't possible for a
>> demonstration to be made either way.
> Neal, the full-complex approach is essentially your approach plus some
> magic to make the following work:
>
> Foo<Object> foo = new Foo<>(1);
>
> That is, a complex approach that takes into account the expected
> return type in order not to infer a type which is too specific. Such
> an approach would be compatible with the currently implemented simple
> approach (in fact, ANY other approach that takes into consideration
> the expected return type would be compatible with the simple
> approach).
>
> I see your point about choosing between usability and language
> evolution. To be honest this seems quite a small price to pay compared
> to other choices that have been made for Java w.r.t. language
> evolution. To name three:
>
> 1) support for generic casts w/o reification, plus raw types --> these
> have made reification nearly impossible
> 2) wildcards --> despite their usefulness when writing library
> methods, they have made the type-system undecidable (well, we don't
> have a full proof yet, but certainly we have bugs)
> 3) addition of synthetic types (captured/intersection types) for
> better type-inference --> leads to infinite types and, again, makes it
> really hard to discuss reification seriously
>
> In other words I don't think that the current implementation of
> diamond should be blamed for painting Java into a corner. As a
> side-issue, consider also that the complex approach, and any other
> inference approach taking into account actual argument types, has the
> "drawback" of inferring captured types as a possible instantiation
> for a generic type. E.g.
>
> void m(List<?> ls) {
>   Foo<?> f = new Foo<>(ls.get(0)); // inferred as Foo<#1>,
>                                    // where #1 extends Object
> }
>
> Again, I don't think this is particularly good in terms of language
> evolution - what does it mean to create an instance of a generic type
> parameterized with a captured type? What should the runtime semantics
> of Foo<#1> be (assuming you have reification)?
>
> The simple approach is not corner case-free - there are circumstances
> in which simple infers an intersection type, e.g. if the class
> type-variable has multiple bounds - but it is definitely better than
> having to support intersection types AND captured types at the same
> time.
>
> Maurizio
>
>> I'm afraid that the currently integrated approach really is
>> painting us into a corner.  The "complex" approach is more clearly
>> upward compatible.  If there are insufficient resources to fully
>> investigate, the conservative solution would be to implement the
>> "complex" approach (or none at all).
>>
>> I find it remarkable that multi-catch was rejected from Coin for
>> veering too near the type system (even though it doesn't affect the
>> type system), while at the same time decisions about generic type
>> inference are being included in Coin even though we lack the
>> resources to fully investigate their long-term impact.
>>
>> -Neal
>>
>>
>
>



