More Typing Problems
Maurizio Cimadamore
maurizio.cimadamore at oracle.com
Tue Sep 7 06:16:36 PDT 2010
On 07/09/10 13:38, Ming-Yee Iu wrote:
> The point I'm trying to make is not that I don't understand why the
> code doesn't work. I'm mostly pointing out that the proposed type
> inference algorithm for Java7 is uncompetitive with C#'s type
> inference algorithm. This style of code works fine in C#. Poor type
> inferencing will make writing LINQish queries in Java7 more verbose
> and error-prone.
>
> The type inferencer should either "just work" (there are well-defined
> situations when you can leave out types) or it "doesn't work" (when
> using lambdas, you should put types everywhere or you'll get obscure
> error messages). These sorts of errors suggest that the type
> inferencer falls into the "sometimes works" category, meaning that in
> practice, programmers will end up putting types everywhere in their
> lambdas when working with generics (i.e. it "doesn't work").
>
Hi,
point taken - however, your comments do not directly refer to the changes
implemented in the lambda branch; I read them as a more general discomfort
with the way Java type inference works (as currently specified in the
JLS); so, while in principle I could agree with you, I think we're going
a bit off-topic here...
> Even worse, right now the compiler is blaming the programmer for
> writing incorrect code when it is, in fact, a failure of the type
> inferencer that is causing the compilation problems.
>
>
Well, a compiler error message is what it is - it can be ugly (and some
involving inference really are), it can be cryptic, but please, no
'blame' intended ;-)
I think your comments suggest that the compiler might do a better job
by rejecting your program with a harsh error message like:
'cyclic type-inference'
instead of trying to infer a type that can potentially cause other
type errors downstream.
My counter-argument is that there are many situations in which the
compiler's guess in a pathological case is actually correct, and no
explicit types are needed in those cases; should we start rejecting
those too, just because they're pathological?
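To give one such case (a minimal sketch of my own, not taken from your
code): a call whose type variable is completely unconstrained falls back
to Object, and that guess is often exactly what's needed:

import java.util.Collections;

class HarmlessGuess {
    public static void main(String[] args) {
        // Nothing here constrains the type variable of emptyList(), so the
        // compiler falls back to Object - which is perfectly fine, since
        // the result is only printed.
        System.out.println(Collections.emptyList());
    }
}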
Note that in your original code you are calling a method, namely
'select', whose signature is:

<U> DBSet<U> select(Select<T,U> x)

where Select is the following SAM type:

public static interface Select<U, V> {
    public V select(U val);
}
with an actual argument of the kind:
#(c){c.getName()}
So, given that there's no explicit type on 'c', and given that the method
'select' accepts *any* subtype of Select<T,U> for any U, I have a hard
time imagining an inference scheme that could infer something meaningful
(not Object) for 'c'.
Of course, if you throw the expected type into the picture, the compiler
can figure out that U is String and not Object. But in your example, the
one with the chained calls, there's no expected type (because the
expected type applies to the chained call to 'sortedByStringAscending').
This means the compiler has two options here: (i) issue an error,
because the inferencer doesn't have enough info to infer a type for 'c',
or (ii) try to infer a 'default' type for 'c'. We currently chose (ii),
since this is the way Java method type inference *already* works - which
means we think it's less disruptive to the average Java programmer.
Example:

class Foo<X> {
    X x;
    static <Z> Foo<Z> make() { return new Foo<Z>(); }
    X getX() { return x; }
}

String s = Foo.make().getX();
The above fails to compile in a way that is similar to your original
example: the call to Foo.make() infers Z as Object, which means the
subsequent call to getX() returns Object, which is not compatible with
String.
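For completeness, here are the two usual ways to make that last line
compile (a minimal sketch, using only the Foo class declared above):

// Give the call an expected type, so that inference picks Z = String:
Foo<String> f = Foo.make();
String s1 = f.getX();

// Or pass the type argument explicitly at the call site:
String s2 = Foo.<String>make().getX();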
Maurizio