Overload resolution strategy

Dan Smith daniel.smith at oracle.com
Wed Jan 30 15:03:09 PST 2013


I'd still love to hear feedback on this.  (I know it's a lot to process.)

—Dan

On Jan 14, 2013, at 3:42 PM, Dan Smith <daniel.smith at oracle.com> wrote:

> Back in May, I wrote up some summaries of overloading design questions that we had been struggling with.  Many involved concerns over how subtle differences in "implicit" lambda expressions (implicit meaning that the types of their parameters are inferred) might cause differences in overload resolution behavior, and whether this was a good idea.
> 
> In the August EG meeting, the consensus was that we should try to be as "dumb" as is reasonable, mostly relying on things like the shape of a lambda for hints about which overload candidates to discard.  And I was left to explore what that might look like and whether it would be viable.
> 
> I've summarized the work we've done on this problem below.  This will help to understand how we arrived at Part F of the 0.6.x spec; I'm also quite interested in feedback on our conclusions.  (I know there's a lot to process here, but I'd really appreciate it if you could take the time to digest it. Overload resolution is, along with other aspects of inference, the hardest language design problem we've had to face, and the one with the fewest obvious answers.)
> 
> ---
> 
> Separately, we'd been looking at the problem of inference for combinator-like methods: methods for which the arguments provide no clues about what an implicit lambda's parameter types should be.  We expect these to be very common.  Example:
> 
> Predicate<String> p = Predicate.negate(s -> s.isEmpty());
> 
> Here, the 'negate' method is generic; the only way to know what the type argument of 'negate' should be is by looking at the assignment's target type ('Predicate<String>'); and at this point, we have to have already resolved overloading.
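> 
> For reference, a minimal sketch of a declaration with this shape (the actual library signature may differ):
> 
> interface Predicate<T> {
>     boolean test(T t);
> 
>     // Nothing in the parameter list fixes T to a concrete type; only
>     // the invocation's target type can pin it down.
>     static <T> Predicate<T> negate(Predicate<T> p) {
>         return t -> !p.test(t);
>     }
> }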
> 
> We type check lambda expressions in a "top-down" manner; this works even through multiple levels of lambda nesting.  But, here, that top-down approach is defeated because method invocations don't pass information top down (from their return types to their parameter types).  The solution is to change that.
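> 
> For instance (a made-up illustration, using java.util.function types):
> 
> Function<String, Supplier<Integer>> f = s -> () -> s.length();
> // The target type gives 's' the type String and makes the inner lambda
> // a Supplier<Integer>; no bottom-up information is needed at any level.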
> 
> ---
> 
> Basic framework:
> 
> Both of these problems suggest that there may be times when we want to leave an implicit lambda expression untyped until after overload resolution is done.  So applicability testing will depend on only some of the arguments.  I call such cases "provisionally applicable" methods.
> 
> Step 1: Potential Applicability.  When a method has an implicit lambda expression argument that can't be typed, we can still look at the "shape" of the lambda -- its arity and whether it returns a value.  This is accomplished via enhancements to the "potentially applicable" test.  Before, we just looked at the arity of the methods.  Now, we also look for functional interface parameter types of a compatible shape.  (Note that there's no type-related analysis that goes on at this stage, other than to identify functional interface types; we don't even recur on lambdas nested inside other lambdas, because that might require some inference work to decide what the nested lambda's target type will be.)
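> 
> For example (hypothetical overloads; only shape is consulted at this stage):
> 
> void m(Runnable r);                 // 0 parameters, void return
> void m(BinaryOperator<Integer> b);  // 2 parameters, returns a value
> 
> m((x, y) -> x + y);
> // The lambda has arity 2 and a value-bearing body, so on shape alone the
> // Runnable candidate is discarded; 'x + y' is not typed at all here.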
> 
> Step 2: Applicability.  Out of a pool of potentially-applicable methods, we need to determine which are applicable.  The old process (Java 7) was, essentially, to perform subtyping tests for each argument.  The new process needs to account for poly expressions by testing that the argument expression is compatible with its target type.  Some analysis is used to determine which arguments should be used for applicability testing, and which should be left untyped.  If one or more arguments is left untyped, the method is only _provisionally_ applicable.  There's a spectrum of choices for how this works -- see the next section for details.
> 
> Step 3: Most Specific.  After we identify applicable methods, we try to identify the "most specific" one.  This is based on the assumption that all candidates will work (i.e., not cause errors), so we're free to just choose whichever one seems best.  But note that when we have a provisionally-applicable method, we can't be sure whether it will really work or not -- not until we've typed the untyped lambda body.  So it seems that the only reasonable thing to do when a provisionally-applicable method is involved is to skip the most-specific check.  (This implies that, when there are multiple applicable candidates, an ambiguity error usually occurs.)
> 
> Where there is a lambda expression argument, we've identified a few different conditions under which the most-specific check should prefer one functional interface type to another -- assuming the functional interfaces have the same descriptor parameter types.  Where S is the return type of one and T is the return type of the other, S is preferred to T when:
> - S <: T
> - T is void
> - S is primitive, T is reference, and all the lambda results are primitive
> - S is reference, T is primitive, and all the lambda results are reference
> - Both are functional interfaces, and S is preferred when we apply the rules recursively
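> 
> For example, a sketch of the primitive-preference rule (using java.util.function types):
> 
> void m(Supplier<Integer> s);  // descriptor: () -> Integer
> void m(IntSupplier s);        // descriptor: () -> int
> 
> m(() -> 42);
> // Both candidates are applicable and the descriptors' parameter lists match;
> // the lambda result '42' is primitive, so the int-returning descriptor is
> // preferred and m(IntSupplier) is chosen.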
> 
> We had previously handled boxing/unboxing with the strict/loose phases of applicability, but this doesn't play nicely with provisional applicability: we don't want adding explicit lambda parameters to move a method from strict-applicable to loose-applicable.  And enhancing the most specific analysis seems like a better way to fit with users' intuition about how this should work, anyway.
> 
> ---
> 
> Applicability testing:
> 
> Given a set of potentially-applicable methods, we need to decide which are applicable.  We have a spectrum of choices, falling between two extremes:
> 
> Aggressive extreme: All implicit lambdas are speculatively typed during overload resolution, for each target type (different target types may lead to different lambda parameter types); this may cause inference to "force" some variables to be resolved during applicability testing; if any errors occur in the lambda bodies, the candidate method is not applicable.
> 
> Example:
> 
> <T> List<T> m(FileFilter f, Block<T> b);
> <T> List<T> m(Predicate<String> p, Block<T> b);
> List<String> l = m(x -> x.getParent() == null, t -> System.out.println(t));
> // 1st lambda only compatible with FileFilter; 2nd lambda forced to be a Block<Object>,
> // causing a downstream assignment error
> 
> Conservative extreme: All implicit lambdas remain untyped until after overload resolution, leaving a set of provisionally-applicable methods.  If the set has more than one element, there is usually an ambiguity error.
> 
> Examples:
> 
> List<String> l = m(x -> x.getParent() == null, t -> System.out.println(t));
> // ambiguity error
> 
> interface Stream<E> {
>     <T> T map(Mapper<E,T> mapper);    // descriptor: E->T
>     byte map(ByteMapper<E> mapper);   // descriptor: E->byte
>     int map(IntMapper<E> mapper);     // descriptor: E->int
>     long map(LongMapper<E> mapper);   // descriptor: E->long
> }
> stream.map(x -> 23);
> // ambiguity error
> 
> Both of these extremes are unacceptable.  The aggressive extreme fails to handle cases like 'negate'.  The conservative extreme fails to handle some common overloading patterns like 'map'.
> 
> We settled on two candidate designs, each a few steps in from these extremes.
> 
> Plan A: When the parameter types of an implicit lambda can't yet be inferred*, the lambda is left untyped and the method is provisionally applicable; otherwise, we speculatively type the lambda, allowing any errors (except exception checks -- see next section) in lambda bodies to impact applicability. (*Still experimenting with how to define "can't yet be inferred", but it should include cases like 'negate'.)
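> 
> A sketch of the two halves of Plan A, reusing 'negate' and a hypothetical 'test' method:
> 
> Predicate<String> p = Predicate.negate(s -> s.isEmpty());
> // The lambda's parameter type depends on the uninferred T of 'negate', so
> // it is left untyped and 'negate' is provisionally applicable.
> 
> boolean test(Predicate<String> p);
> boolean b = test(s -> s.isEmpty());
> // The parameter type (String) is known without inference, so the lambda is
> // speculatively typed; an error in its body would make 'test' inapplicable.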
> 
> Earlier, we had been considering type checking only part of a block lambda body -- the expressions appearing after 'return ...' -- but decided that was too hard to specify and explain, and too brittle.  So speculative typing considers the whole body, including, e.g., any access of the lambda parameters' fields or methods, in order to find errors that would make the method inapplicable.
> 
> Plan B: An implicit lambda is typed if all potentially-applicable methods share the same parameter types; otherwise, it remains untyped and the method is provisionally applicable.  (For example, for the above 'map' method, the parameter type is always 'E'.)
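> 
> Under Plan B, the earlier 'map' example works (a sketch):
> 
> stream.map(x -> 23);
> // All four candidates give the lambda parameter the same type E, so the
> // lambda is typed before applicability testing, and resolution can proceed
> // as it would for any ordinary typed argument instead of failing with an
> // ambiguity error.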
> 
> The difference between the two plans, essentially, is the conditions under which a lambda is left untyped.  Plan B has more untyped lambdas (a superset): it adds an extra check between potential applicability testing and full applicability testing that will mark lambdas as untyped if they have inconsistent target parameter type lists.
> 
> We explored both of these at length, and chose Plan A, but it's important to understand the trade-offs.
> 
> Problems with Plan A:
> 
> Plan A is essentially what we've had in mind for most of the evolution of the project, so why reconsider it?  (Some of this repeats discussion from the August EG meeting...)
> 
> - Heavier demands on type-checker implementations.  When encountering mixed target parameter types, Plan B conservatively says "I'm not sure how to type this lambda yet so I'll skip it," while Plan A says "let's find all possible typings for this lambda."  The javac engine supports speculative typing, and we're happy with it, but it would be nice not to bake that requirement into the language.
> 
> - Theoretical complexity.  When overloaded methods & lambdas are deeply nested (unlikely in typical programs), type checking has an exponential-time cost (exponent being the nesting depth).  C# has this property, and it actually caused them practical problems down the road when LINQ was implemented via rewrites to nested lambdas:
> http://blogs.msdn.com/b/ericlippert/archive/2007/03/26/lambda-expressions-vs-anonymous-methods-part-four.aspx
> 
> - Hard to interpret programs.  When there are mixed target parameter types, users need to think in terms of expressions in lambda bodies having multiple types ("if 'x' is a String, then this means ...; if 'x' is a Widget, then this means ...").  If lambdas are nested, there's a cross-product effect that makes this even worse.
> 
> - Brittle when refactoring.  Changing a lambda body in seemingly innocent ways might have unexpected overloading side-effects: maybe the overloading outcome depends on a particular invocation ('param.getLeftToe()'); changing or removing that expression might change which types for 'param' are considered acceptable.
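> 
> To illustrate (hypothetical overloads whose implicit-lambda parameter types differ):
> 
> String h(Function<File, String> f);
> String h(Function<String, String> f);
> 
> h(x -> x.getParent());  // only File has getParent(): the first overload is chosen
> h(x -> x.toString());   // both typings succeed: the call is now ambiguous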
> 
> - Unhelpful failure mode.  If something goes wrong as described by one of the above concerns, the user won't get immediate feedback.  Instead, overload resolution simply chooses an interpretation for the lambda, and any problems will manifest themselves downstream, during type checking or even at runtime.
> 
> It's worth emphasizing that all of these concerns deal with corner cases: to get something bad to happen, you need a pair of overloaded methods that give the lambda parameters different (but similar) types, and that have different return types or behavior.  So "in the wild" experience with these issues is not likely to be common.  The concern is mostly about having a clean, easy-to-understand model, and about avoiding rare "I have no idea what the compiler is doing" moments.
> 
> Problems with Plan B:
> 
> - Does not play well with generic methods.  If type argument inference is a prerequisite to determining the parameter types of the lambda, we probably have to assume that the unresolved inference variables represent different types, even though they may ultimately be inferred to be the same thing.  Example:
> 
> static <E1,T> T map(Stream<E1> stream, Mapper<E1,T> mapper); // descriptor: E1 -> T
> static <E2> byte map(Stream<E2> stream, ByteMapper<E2> mapper); // descriptor: E2 -> byte
> ...
> 
> (Here I've "flattened" the above 'map' instance methods into static methods.)
> 
> While it may be "obvious" that the Mapper and ByteMapper always have the same parameter types, it seems really hard to define that "obviously the same" intuition accurately.
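> 
> Concretely (continuing the flattened 'map' declarations above):
> 
> Stream<String> strings = ...;
> map(strings, x -> x.length());
> // Before inference, the candidate descriptors are E1 -> T and E2 -> byte.
> // Formally, the lambda's parameter type differs per candidate (E1 vs. E2),
> // so under Plan B it stays untyped, even though both variables would be
> // inferred as String.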
> 
> So sets of generic (usually static) methods like this would probably need to be defined with different method names.  The availability of default methods makes this a much more manageable concern, but it's still a concern.
> 
> - It's an unproven idea that might not provide enough power.  We spent a good deal of effort trying to identify a common overloading pattern that requires multiple methods that accept functions with different parameter types.  Surprisingly, the "same parameter types" intuition seems to pretty effectively cover the way people actually overload function-accepting methods (based on an exploration of LINQ, Scala collections, and a few other APIs).  Unfortunately, demonstrating the viability of Plan B is, loosely, a semi-decidable problem: it may be possible to produce a counter-example, but until we find one, we're stuck with "I don't know" as an answer.
> 
> So there's a risk that we release Java 8 with Plan B and then find a lot of people complaining that overload resolution is giving them ambiguity errors when there's "clearly" no ambiguity.  That could be addressed by adding power in Java 9, but our hands would be tied in the meantime.
> 
> - Adds some complexity to the "applicability" concept.  In order to understand whether a lambda argument will influence applicability or not, you've got to collect all the potentially-applicable methods and decide whether they have the same parameter types.  That's another layer inserted into overload resolution.
> 
> Summary:
> 
> Plan B offers a sensible way to avoid a lot of tricky problems with Plan A, but does so by adopting the risk that it won't be powerful enough.  Plan B can be viewed as adding a guardrail to keep people out of nasty corner cases; the risk is that the guardrail will encroach on the highway.  In our estimation, the risk is too great, and so we prefer to accept the theoretical problems and quirks of Plan A.
> 
> That said, I'm curious to hear opinions on how we've weighed the various concerns.
> 
> ---
> 
> Exception checking:
> 
> There's a general intuition that exceptions being thrown in lambda bodies should not impact overload resolution, since it's such a subtle property of the code.  (This discussion applies to lambdas with either implicit or explicit parameter types.)
> 
> More fundamentally, exception checking may depend on the results of inference, which we don't get until after overload resolution is done:
> 
> <T extends Exception> T foo() throws T;
> 
> interface StringFunction<T> { T f(String s) throws IOException; }
> 
> <T> List<T> m(StringFunction<T> f);
> 
> List<IOException> l = m((String s) -> foo()); // 'foo' throws IOException, which is fine
> List<SQLException> l = m((String s) -> foo()); // 'foo' throws SQLException, which is an error
> 
> Another exception check ensures that a 'catch' clause does not try to catch an exception that is never thrown.  This, too, cannot be checked before inference is done:
> 
> List<IOException> l = m((String s) -> {
>     try { return foo(); }
>     catch (IOException e) { return null; } // okay, 'foo' throws IOException
> });
> 
> List<SQLException> l = m((String s) -> {
>     try { return foo(); }
>     catch (IOException e) { return null; } // error, 'foo' doesn't throw IOException
> });
> 
> Conclusion: exception-checking errors must be special-cased, excluded from the definition of lambda compatibility and method applicability.
