Overload resolution simplification
Maurizio Cimadamore
maurizio.cimadamore at oracle.com
Fri Aug 16 03:16:21 PDT 2013
On 16/08/13 00:36, Ali Ebrahimi wrote:
> I think this can also happen in non-lambda world.
> Can you show some example that how this may happen for comparing?
I don't think you even need comparing to get to that; if your strategy
is: type-check the lambda against each possible target (hence, multiple
times) and reject each target whenever the lambda cannot be type-checked
against that target, it means that overload resolution depends on errors
in the body of the lambda. Example:
interface Function<X, Y> {
    Y m(X x);
}

void m(Function<String, Integer> f);
void m(Function<Integer, Integer> f);

m(x -> x.intValue());
so, you take the lambda, try to check it using F<S, I> as a target - you
get an error, as there's no member called intValue() in String - so you
rule out the first candidate. Then you check the second method, and
everything is fine. So you pick the second - you don't even get to most
specific, as the first is not applicable.
This seems relatively straightforward - but what if the body was:
m(x -> SomeClass.g(x))
well, now the fact that the lambda can be type-checked depends on the
fact that we can resolve 'g' with a given parameter type; which
ultimately means that the 'm' method that will be picked will depend on
which methods are available in SomeClass. Add one more method there and
you can get different behavior (resolution for 'g' might change, as it
could, e.g., pick a different most specific method)! Now, if we play
devil's advocate, we can argue that if some code has this statement:
SomeClass.g(someActual)
Its semantics (i.e. which method gets chosen) is always going to depend
on the set of methods available on SomeClass. And, even if we do this:
f(SomeClass.g(someActual))
we are going to get different selection of f, based on g's most specific.
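To make this nested-resolution point concrete, here is a small, self-contained sketch (the class, method names, and bodies are mine, not from the original example): the overloads available for 'g' determine g's return type, which in turn determines which overload of 'f' is selected at the outer call site.

```java
// Hypothetical illustration: the set of overloads of g in Helper
// determines which overload of f is picked at the call f(Helper.g(1)).
class Helper {
    static int g(int x)   { return x; }  // most specific for an int argument
    static long g(long x) { return x; }
}

class NestedResolution {
    static String f(int v)  { return "f(int)"; }
    static String f(long v) { return "f(long)"; }

    public static void main(String[] args) {
        // g(int) is more specific than g(long) for the argument 1,
        // so g returns int and the f(int) overload is selected.
        System.out.println(f(Helper.g(1)));
        // Delete g(int) above and recompile: g(long) is picked instead,
        // g returns long, and the very same call site selects f(long).
    }
}
```

Here the argument type of `Helper.g(1)` is statically known, which is exactly the distinction drawn below: the outer selection shifts, but each step is still resolvable from known types.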
But there is, I think, an important distinction to be made: in the
lambda case you basically don't know what the type of x is. In the other two
cases, the type of the actual argument will be well known. So, I think a
lot of the discomfort over the lambda case is coming from the fact that
the user will have to reason about _two_ processes happening in
parallel: the selection of the lambda parameter type 'x' and the
selection of the innermost overload candidate 'g'. There's an interplay
between the two processes that is very subtle - one might argue that 90%
of the time the user won't even have to know about it - but what about
the other 10%? Will the user have the right mental model to even
reason about the problem at hand?
What the compiler is asking for now basically lets you fall back into a
world we know - if you want to compile code like that, you need to put
a type on your lambda parameter; 'x' then becomes statically known (w/o
inference), so that all the nasty overload selection in the guts of the
lambda body will be just as nasty as before - but not worse.
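The suggested fallback can be sketched as follows (the interface and method names are mine; I use two distinct functional interfaces standing in for Function&lt;String, Integer&gt; and Function&lt;Integer, Integer&gt;, since overloads whose parameter types share an erasure would not compile): with an explicitly typed parameter, the lambda is compatible with only one target, so only one overload is applicable and nothing in the body needs to be reasoned about per-target.

```java
// Hypothetical stand-ins for Function<String, Integer> and
// Function<Integer, Integer> (distinct types, so the overloads are legal).
interface FnSI { Integer m(String x); }
interface FnII { Integer m(Integer x); }

class ExplicitLambda {
    static String call(FnSI f) { return "String target"; }
    static String call(FnII f) { return "Integer target"; }

    public static void main(String[] args) {
        // call(x -> x.intValue()) would force the compiler to reason
        // about the lambda body under each target; the explicit
        // (Integer x) rules out the FnSI overload up front, with no
        // inference needed inside the body.
        System.out.println(call((Integer x) -> x.intValue()));
    }
}
```

The explicitly typed lambda is simply not compatible with the String-accepting target, so overload selection completes before the body is ever looked at.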
Maurizio
More information about the lambda-spec-observers
mailing list