Overload resolution simplification

Ali Ebrahimi ali.ebrahimi1781 at gmail.com
Mon Aug 19 04:20:51 PDT 2013


Hi Maurizio,
What do you mean by complexity: complexity for the user or complexity for the
compiler? Expressiveness and ease of use are essentially user-side concepts, and
the current state hurts them.
If your intent is user-side complexity, I think at least for the comparing
case, mandating explicit lambda parameter types does not add complexity for the
programmer and does not change the selected overload. The only way to direct the
compiler to select a specific overload is casting the lambda to some SAM type
(see the sketch below).
If your intent is compiler-side complexity, then based on your previous reply the
problem is already solved.
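
For example, a minimal sketch of what I mean, reusing the Person/persons example
and the draft comparing overloads quoted further down in this thread:

// without a cast, the compiler's overload selection scheme decides:
persons.sorted(comparing(x -> x.getAge()));

// casting the lambda to a specific SAM type forces that overload, here the
// generic Function-based comparing rather than the ToIntFunction-based one:
persons.sorted(comparing((Function<Person, Integer>) x -> x.getAge()));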

Best Regards,
Ali Ebrahimi

On Mon, Aug 19, 2013 at 3:01 PM, Maurizio Cimadamore <
maurizio.cimadamore at oracle.com> wrote:

>  I think you are essentially describing a subset of what the compiler
> used to do. In fact, I believe your example used to work.
> As it's been said in this thread - it's not a technical problem; we
> already have the infrastructure to do all flavours of type-checking :-)
> Finding the right balance between expressiveness/complexity/ease of use is
> the challenge here.
>
> Maurizio
>
>
> On 17/08/13 00:25, Ali Ebrahimi wrote:
>
> Hi,
>
>
> On Fri, Aug 16, 2013 at 2:46 PM, Maurizio Cimadamore <
> maurizio.cimadamore at oracle.com> wrote:
>
>> On 16/08/13 00:36, Ali Ebrahimi wrote:
>>
>>> I think this can also happen in the non-lambda world.
>>> Can you show some example of how this may happen for comparing?
>>>
>>  I don't think you even need comparing to get to that; if your strategy
>> is: type-check the lambda against each possible target (hence, multiple
>> times) and reject each target whenever the lambda cannot be type-checked
>> against that target, it means that overload resolution depends on errors in
>> the body of the lambda. Example:
>>
>> interface Function<X, Y> {
>>    Y m(X x)
>> }
>>
>> m(Function<String, Integer> f)
>> m(Function<Integer, Integer> f)
>>
> For a moment I thought we were in the future. I see that it's a typo.
>
>>
>> m(x->x.intValue())
>>
>> so, you take the lambda, try to check it using F<S, I> as a target - you
>> get an error, as there's no member called intValue() in String - so you
>> rule out the first candidate. Then you check the second method, and
>> everything is fine. So you pick the second - you don't even get to most
>> specific, as the first is not applicable.
>>
>> This seems relatively straightforward - but what if the body was:
>>
>> m(x-> SomeClass.g(x))
>>
>> well, now the fact that the lambda can be type-checked depends on the
>> fact that we can resolve 'g' with a given parameter type; which ultimately
>> means that the 'm' method that will be picked will depend on which methods
>> are available in SomeClass.
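>>
>> For concreteness, a hypothetical SomeClass (names made up just for this
>> sketch) where adding one more method changes the outcome:
>>
>> class SomeClass {
>>     static Integer g(Integer i) { return i + 1; }
>>     // With only the method above, the lambda x -> SomeClass.g(x) fails to
>>     // type-check against Function<String, Integer> (no applicable 'g' for a
>>     // String argument), so only m(Function<Integer, Integer>) is applicable.
>>     // Adding the overload below makes the lambda type-check against both
>>     // targets, and the selection of 'm' changes:
>>     // static Integer g(String s) { return s.length(); }
>> }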
>
> I don't get this. First, we are in the type inference phase and we do lambda
> body type checking for each of m's overloads independently, so during this
> process we have already chosen a candidate m. The result of type checking only
> verifies that chosen m.
> If the type inference process for m's overloads finds only one correct answer,
> we go ahead with it.
>
>
>> Add one more method there and you can get a different behavior (as
>> resolution for 'g' might change, as it could have i.e. a different most
>> specific)! Now, if we play devil's advocate, we can argue that if some code
>> has this statement:
>>
>> SomeClass.g(someActual)
>>
>> Its semantics (i.e. which method gets chosen) is always going to depend
>> on the set of methods available on SomeClass. And, even if we do this:
>>
>> f(SomeClass.g(someActual))
>>
>> we are going to get different selection of f, based on g's most specific.
>>
> This is my point. So this problem is not limited to lambdas. We already
> had it.
>
>>
>> But there is, I think, an important distinction to be made: in the lambda
>> case you basically don't know what the type of x is. In the other two
>> cases, the type of the actual argument will be well known. So, I think a
>> lot of the discomfort over the lambda case is coming from the fact that the
>> user will have to reason about _two_ processes happening in parallel: the
>> selection of the lambda parameter type 'x' and the selection of the
>> innermost overload candidate 'g'.
>
> I say this again: in my proposed scheme we do type inference for each
> overload independently. First, for x we have the type inferred from the
> inference context of the current overload; and second, selection of the
> innermost overload candidate 'g' does not change the overload currently being
> processed. We only type check the lambda body for the current overload in
> order to infer the lambda return type. Then, if type checking was successful
> and we inferred the lambda's return type, we verify it against the expected
> type based on the current overload.
> Consider this example. Suppose we have:
>  class Person{
>    private String name;
>    private int age;
>
>  //getters & setters
> ...
>  }
>
>  Stream<Person> persons = ...;
>
>  persons.sorted(comparing(x -> x.getAge()));
>
> We have 4 overloads for comparing; for each overload we run the inference
> process separately. Suppose we do this process for
>  <T, U extends Comparable<? super U>> comparing(
>             Function<? super T, ? extends U> keyExtractor)
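>
> For reference, here are the four draft overloads collected together, roughly as
> they appear through this walkthrough (return type added to the first signature
> for completeness):
>
> <T, U extends Comparable<? super U>> Comparator<T> comparing(
>             Function<? super T, ? extends U> keyExtractor)
> <T> Comparator<T> comparing(ToIntFunction<? super T> keyExtractor)
> <T> Comparator<T> comparing(ToLongFunction<? super T> keyExtractor)
> <T> Comparator<T> comparing(ToDoubleFunction<? super T> keyExtractor)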
>
> In the first step we try to infer comparing's type arguments from the inference
> context. As I said before, for type checking lambda bodies we only need the
> type arguments that give the lambda its input (parameter) types to be inferred;
> in this case that is only the type argument T. As you can see, in this case the
> compiler can infer the type Person for comparing's type argument T. So we can
> inject this type into the lambda parameter and start lambda body type checking.
> So in fact, now we have this:
> persons.sorted(Comparator.<Person>comparing((Person x) -> x.getAge()));
> The lambda body is only one expression, x.getAge(). As you can see, x is
> already known and its type is Person, and Person has a member getAge(), so type
> checking is successful. But this is not the end of the story, since we should
> still infer the lambda return type.
>
> Suppose we are in mode 1 (the three modes are defined just below):
> the inferred return type for the lambda, based on the expression x.getAge(),
> would be int.
>
> After inferring (computing) the return type, we verify the result (int) against
> the expected type based on the current overload.
> The expected type is U extends Comparable<? super U>.
>
> As I said before, we do lambda return type verification in 3 phases or
> modes:
> 1) without boxing, unboxing and widening (cannot do boxing, unboxing or
> widening)
> 2) without boxing and unboxing, but with widening (cannot do boxing or
> unboxing, but can do widening)
> 3) with boxing, unboxing and widening (can do boxing, unboxing and
> widening)
>
> For mode 1, verification fails: int is not a reference type and does not
> satisfy U extends Comparable<? super U>,
> so we give up this overload in mode 1.
>
> For mode 2, verification also fails, by the same reasoning as in mode 1,
> so we give up this overload in mode 2.
>
> For mode 3 (with boxing), verification does not fail, since Integer satisfies
> the constraint U extends Comparable<? super U>,
> so we add this overload to the mode 3 applicable overloads list.
>
> We do this process for the 3 other overloads.
>
> <T> Comparator<T> comparing(ToIntFunction<? super T> keyExtractor)
>
> The inferred T would be Person for this overload as well, so lambda body type
> checking would be successful.
> So let's go to the final step: lambda return type verification.
> In this case the expected type is int.
> For mode 1, verification is fine (int == int),
> so we add this overload to the mode 1 applicable overloads list.
>
> For mode 2, verification is fine (int == int, no widening needed),
> so we add this overload to the mode 2 applicable overloads list.
>
> For mode 3, verification is fine (int == int, no boxing, unboxing or widening
> needed),
> so we add this overload to the mode 3 applicable overloads list.
>
> <T> Comparator<T> comparing(ToLongFunction<? super T> keyExtractor)
> In this case the expected type is long.
>
> For mode 1, verification fails (int != long),
> so we give up this overload in mode 1.
>
> For mode 2, verification is fine ((long) int == long, widening is allowed),
> so we add this overload to the mode 2 applicable overloads list.
>
> For mode 3, verification is fine ((long) int == long, widening is allowed),
> so we add this overload to the mode 3 applicable overloads list.
>
>
> <T> Comparator<T> comparing(ToDoubleFunction<? super T> keyExtractor)
> In this case the expected type is double.
>
> For mode 1, verification fails (int != double),
> so we give up this overload in mode 1.
>
> For mode 2, verification is fine ((double) int == double, widening is allowed),
> so we add this overload to the mode 2 applicable overloads list.
>
> For mode 3, verification is fine ((double) int == double, widening is allowed),
> so we add this overload to the mode 3 applicable overloads list.
>
> Finally, we process the results obtained for all overloads.
> As you can see, the mode 1 applicable overloads list contains only one
> entry, so inference is successful.
>
> If the mode 1 applicable overloads list had more than one entry, then
> inference would fail with an error.
> If the mode 1 applicable overloads list were empty, we would check the mode 2
> applicable overloads list by the same process, and after mode 2 we would go on
> to mode 3 if the mode 2 applicable overloads list were also empty.
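>
> Roughly, a sketch of that selection loop (hypothetical names, not real javac
> code; Overload, Mode and verifies() are placeholders for the machinery above):
>
> static Overload select(java.util.List<Overload> candidates) {
>     // candidates: the overloads whose lambda body already type-checked
>     Mode[] modes = { Mode.NO_BOXING_NO_WIDENING, Mode.WIDENING, Mode.BOXING_AND_WIDENING };
>     for (Mode mode : modes) {
>         java.util.List<Overload> applicable = new java.util.ArrayList<>();
>         for (Overload o : candidates) {
>             if (verifies(o, mode)) applicable.add(o);  // per-mode return type check
>         }
>         if (applicable.size() == 1) return applicable.get(0);  // unique match: done
>         if (applicable.size() > 1) throw new IllegalStateException("ambiguous");
>         // empty list: fall through to the next, more permissive mode
>     }
>     throw new IllegalStateException("no applicable overload");
> }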
>
> So, based on my proposed scheme,
> <T> Comparator<T> comparing(ToIntFunction<? super T> keyExtractor)
> is the best and only applicable overload.
>
>> There's an interplay between the two processes that is very subtle - one
>> might argue that 90% of the time the user won't even have to know about that -
>> but what about the other 10% of the time? Will the user have the right mental
>> model to even reason about the problem at hand?
>>
> I don't think so, at least in real-world code. In fact, I imagine the reverse
> would be more correct.
> Consider the comparing case.
>
>
>  Regards,
>  Ali Ebrahimi
>
>
>

