From daniel.smith at oracle.com Wed Jan 2 15:14:28 2013 From: daniel.smith at oracle.com (Dan Smith) Date: Wed, 2 Jan 2013 16:14:28 -0700 Subject: JSR 335 Lambda Specification, 0.6.1 Message-ID: <01B0FD7D-782B-4443-84FA-599598BEEBC7@oracle.com> An updated specification can be found here: http://cr.openjdk.java.net/~dlsmith/jsr335-0.6.1/ Other links: Diff: http://cr.openjdk.java.net/~dlsmith/jsr335-0.6.1-diff.html One-page HTML: http://cr.openjdk.java.net/~dlsmith/jsr335-0.6.1/full.html Downloadable zip: http://cr.openjdk.java.net/~dlsmith/jsr335-0.6.1.zip The bulk of the changes are some big improvements to Type Inference (Part G) and Java Virtual Machine (Part J). These areas are still under active discussion, so further changes and refinements will be coming. Barring any objection, I'd like to post this on the JCP site as an EDR. --Dan From forax at univ-mlv.fr Mon Jan 7 04:47:15 2013 From: forax at univ-mlv.fr (Remi Forax) Date: Mon, 07 Jan 2013 13:47:15 +0100 Subject: [jsr-292-eg] Conversion of method reference to MethodHandle In-Reply-To: <50E1ACAC.8030806@univ-mlv.fr> References: <50E0DF61.3060608@univ-mlv.fr> <50E0E3CA.4020005@oracle.com> <50E1ACAC.8030806@univ-mlv.fr> Message-ID: <50EAC3D3.8030402@univ-mlv.fr> ping ... On 12/31/2012 04:18 PM, Remi Forax wrote: > On 12/31/2012 02:00 AM, Brian Goetz wrote: >> To start with, the syntax that has been chosen for method references >> is inadequate for method handles. This is because of the point you >> raise -- there is no way to specify the type of the parameters, and >> therefore no reasonable way (creating a customized functional >> interface for each usage is not reasonable) to select between >> multiple overloads of the same name. We have discussed explicit type >> parameters, but we concluded (or so we still think; the jury is still >> out) that not adding this bit of syntax is adequate for what we are >> doing here. 
>> >> Failing completely in the presence of overloading makes this a >> crippled feature with even more sharp edges than it naturally has. > > I see that as a feature. > People tend to use overloading where they should not. List::remove is > a good example, JPanel#add is another one. > The only cases where it's correctly used are PrintStream#println, > StringBuilder#append or Math::sqrt, because all overloads have the same > semantics. > > More fundamentally, a method reference does early binding, so even a > method reference with a functional interface is broken, because the > overload resolution should happen when calling it and not when > creating it. > > The simple workaround is to create a bridge method, if you can't > change the class to not use overloading: > private static void myprintln(PrintStream stream, int value) { > stream.println(value); // println is void, so no 'return' here > } > >> But that's not even the real objection, > > cool ! > >> nor is this: >> >>> And because Brian will say that this is interesting but Oracle has >>> no resource that can work on this, >>> here is the patch for the compiler* part based on lambda/langtools : >>> http://www-igm.univ-mlv.fr/~forax/tmp/method-ref-method-handle-langtools.diff >>> and the patch for the jdk part based on lambda/jdk: >>> http://www-igm.univ-mlv.fr/~forax/tmp/method-ref-method-handle-jdk.diff >> >> My true objection is deeper than any of these. After thinking about >> it for a long time, I think this feature just doesn't belong. The >> set of people who need method handle literals is a tiny sliver of the >> Java developer base; complicating "everyone's Java" for the sake of a >> few advanced users (100% of whom are qualified to generate bytecode) >> is a bad stewardship choice. > > There are two points here: > 1) the set of people interested in this feature is tiny. > 2) it complicates Java for everybody. > > Point 1. > Perhaps the set of people is actually not that big; it's the people > that write language runtimes, i.e. 
people that really need to use > method handles. > But there is a bigger set of people, the ones that will use that > feature: all the people that actually use java.lang.reflect are > people that will use that feature. > The fact that there is no method reference literal makes things like > creating bean properties, interceptors, proxies, etc. more painful than > they should be. > So yes, the actual set of people is not big, but the set of people that > will use it is far bigger. > > Point 2. > It's just not true; that's the beauty of target typing. The syntax > already exists, the semantics is simple, and the overhead is really tiny. > You can compare it to the introduction of the hexadecimal syntax for > doubles in Java 5. > The hexadecimal syntax already existed; since Java 5, that syntax can be > used to specify doubles, which is not something really complicated. > >> >> I'm one of those few who has to resort to writing ASM programs to use >> MHs when I want them. So I know exactly how painful it is. And >> still, I'd rather impose that pain on myself and my peers than >> increase the complexity of Java for everyone for the benefit of a >> small few. > > see point 1. > >> >> The way I solved this for myself is I wrote a tool which >> opportunistically mangles calls to Lookup.findXxx (with constant >> arguments) into LDCs using bytecode mangling. From a >> programming-model perspective, this is only slightly more painful >> than what you propose -- and works more generally. (It also works >> for indy.) I much prefer an approach like that. > > I've also written this kind of code, John Rose did too, and everyone > that has wanted to write JUnit tests using method handles has > written something similar. > But focusing only on actual users, people that are forced to use > MethodHandle, seems very wrong to me. 
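[Editorial note: for context on the overload-selection problem discussed in this thread, this is how an overload is pinned down today through the reflective API: the explicit MethodType names the exact parameter and return types, which is precisely the information a bare method reference target typed by MethodHandle cannot carry. A small runnable sketch; the helper name is hypothetical.]

```java
import java.io.PrintStream;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class LookupDemo {
    // Select the println(int) overload of PrintStream explicitly:
    // the MethodType (void.class, int.class) disambiguates among the
    // many println overloads, something 'PrintStream::println' assigned
    // to a MethodHandle could not express.
    static MethodHandle printlnInt() throws ReflectiveOperationException {
        return MethodHandles.lookup().findVirtual(
                PrintStream.class, "println",
                MethodType.methodType(void.class, int.class));
    }

    public static void main(String[] args) throws Throwable {
        // The handle's type is (PrintStream, int) -> void.
        printlnInt().invokeExact(System.out, 42); // prints 42
    }
}
```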
> > Rémi > >> >> On 12/30/2012 7:42 PM, Remi Forax wrote: >>> [to lambda spec list and JSR292 EG list] >>> >>> I want to close one problem that we (the JSR 292 EG) have postponed to >>> Java 8 when we were developing the JSR 292: >>> a Java syntax to reference a method or a constructor as a method >>> handle. >>> This item was postponed for two reasons; the first one is that the JSR >>> 292 was a JSR allowing changes to the VM spec and not the Java spec, and >>> the second one was that at that time the syntax for method references was >>> in limbo and we didn't want to choose a syntax that might be incompatible >>> with the upcoming lambda spec. >>> >>> Now that the syntax part of the lambda spec is frozen, we can solve that >>> issue. >>> >>> Just before starting, I want to briefly explain why this syntax is >>> needed. >>> The runtime of a dynamic language uses several well-known functions >>> (usually specified as static methods) implementing either part of the >>> runtime logic, like a function that checks whether the runtime class of an >>> object is equal to a specified class, or some global functions of the >>> language itself (here is the file for JavaScript in Nashorn [1]). >>> The main issue is that, to be used in the JSR 292 ecosystem, these functions >>> must be handled by method handles, and these method handles are defined >>> as static final fields; >>> because Java doesn't initialize these fields lazily, this greatly >>> impacts the startup time of any dynamic runtime that runs on the JVM. >>> >>> So the goals of the syntax that converts a method reference to a method >>> handle are: >>> - to move some runtime checks to compile time, because creating a >>> method handle involves using a string and a signature. >>> - to lazily create these method handles when needed (like the >>> lambda). >>> >>> The proposed syntax is to use the same syntax as the method reference, >>> so no new syntax. 
>>> MethodHandle mh = String::length; >>> the target type is a j.l.i.MethodHandle instead of being a functional >>> interface >>> >>> For the semantics: >>> Unlike the method reference with a functional interface, only >>> unbound method references are allowed; so, for example, >>> a method reference starting with an expression or 'super' is not >>> valid. >>> Moreover, because there is no type information that the compiler can >>> extract from a MethodHandle, the semantics in case of several >>> overloaded methods, i.e. several applicable methods, is simple: it's an >>> ambiguity and is rejected by the compiler. >>> More formally, a method reference can be converted to a >>> j.l.i.MethodHandle, >>> - only if its form is >>> Type::name, with Type an existing type and name a valid Java >>> identifier or 'new'. >>> - then from the type, the compiler gathers all accessible methods >>> declared in the type or its supertypes (classes or interfaces); >>> if a method is override-equivalent to another one, the one from >>> the supertype is discarded. >>> if the name is 'new', the compiler gathers all constructors of the >>> type. >>> if the type is an array, all public methods of j.l.Object are >>> available, plus clone() >>> - if there is more than one method in the set of gathered methods, >>> the reference is ambiguous. >>> - if there is no method, the reference is not available >>> - if there is one method, a method handle will be created from this >>> method. >>> >>> Here are some examples: >>> class Test { >>> public static void foo() { } >>> } >>> ... >>> MethodHandle mh1 = Test::foo; // foo()V >>> mh1.invokeExact(); >>> >>> class Test { >>> public void bar() { } >>> } >>> ... >>> MethodHandle mh2 = Test::bar; // bar(Test)V >>> mh2.invokeExact(new Test()); >>> >>> class Test { >>> interface I { >>> public void m(); >>> } >>> static class A implements I { >>> public void m() { } >>> } >>> } >>> ... 
>>> MethodHandle mh3 = I::m; // m(I)V >>> mh3.invokeExact((I)new A()); >>> >>> class Test { >>> static class B { >>> public B(int value) { } >>> } >>> } >>> ... >>> MethodHandle mh4 = B::new; // B(int) >>> B b = (B)mh4.invokeExact(3); >>> >>> class Test { >>> class C { // inner class >>> public C() { } >>> } >>> } >>> ... >>> MethodHandle mh5 = C::new; // C(Test) >>> C c = (C)mh5.invokeExact(new Test()); >>> >>> class Test { >>> static class D { >>> Object foo() { return null; } >>> } >>> static class E { >>> String foo() { return null; } // covariant return type >>> } >>> } >>> ... >>> MethodHandle mh8 = E::foo; // foo(E)String >>> String s3 = (String)mh8.invokeExact(new E()); >>> >>> class Test { >>> static class F { >>> private static void m() { } // non-visible method for the VM, OK >>> in Java, so the compiler has to generate a bridge >>> } >>> } >>> ... >>> MethodHandle mh9 = F::m; >>> mh9.invoke(); >>> >>> MethodHandle mh10 = String[]::clone; // String[].clone returns >>> Object, so it needs a bridge generated by the compiler >>> String[] values = (String[])mh10.invoke(args); >>> >>> As you can see, the syntax already exists; the semantics of the >>> attribution (finding the correct method) is simpler than the method call >>> semantics (there is no inference), and after the attribution, the rules >>> for the generation, bridges, etc. are the same as the ones for a method >>> reference with a functional interface as target. >>> >>> Like a method reference, the compiler generates an invokedynamic >>> with a >>> different bootstrap method that takes the constant method handle as >>> parameter and creates a constant call site with this constant method >>> handle (so at runtime a method reference seen as a method handle is >>> always constant). 
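[Editorial note: the compilation strategy sketched above, an invokedynamic whose bootstrap receives the constant method handle and wraps it in a constant call site, can be illustrated with a small runnable sketch. The bootstrap shape below is hypothetical, not the one in the linked patches; the main method simulates what the JVM would do at the indy call site.]

```java
import java.lang.invoke.*;

public class MhLiteralBootstrap {
    // Hypothetical bootstrap: javac would pass the resolved method handle
    // as a static bootstrap argument; binding it in a ConstantCallSite
    // makes the "method handle literal" a constant after linkage.
    public static CallSite bootstrap(MethodHandles.Lookup lookup, String name,
                                     MethodType type, MethodHandle target) {
        return new ConstantCallSite(target.asType(type));
    }

    public static void main(String[] args) throws Throwable {
        // Simulate linkage of 'MethodHandle mh = String::length;'
        MethodHandle length = MethodHandles.lookup().findVirtual(
                String.class, "length", MethodType.methodType(int.class));
        CallSite cs = bootstrap(MethodHandles.lookup(), "mh",
                MethodType.methodType(int.class, String.class), length);
        int n = (int) cs.dynamicInvoker().invokeExact("hello");
        System.out.println(n); // prints 5
    }
}
```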
>>> >>> And because Brian will say that this is interesting but Oracle has no >>> resource that can work on this, >>> here is the patch for the compiler* part based on lambda/langtools : >>> http://www-igm.univ-mlv.fr/~forax/tmp/method-ref-method-handle-langtools.diff >>> >>> >>> and the patch for the jdk part based on lambda/jdk: >>> http://www-igm.univ-mlv.fr/~forax/tmp/method-ref-method-handle-jdk.diff >>> >>> cheers, >>> Rémi >>> * note, the mh9 doesn't work at runtime because it also doesn't work >>> for >>> method reference with a functional interface. >>> >>> [1] >>> http://hg.openjdk.java.net/nashorn/jdk8/nashorn/file/b4b05457b8b2/src/jdk/nashorn/internal/runtime/GlobalFunctions.java >>> >>> >>> > From daniel.smith at oracle.com Tue Jan 8 12:47:20 2013 From: daniel.smith at oracle.com (Dan Smith) Date: Tue, 8 Jan 2013 13:47:20 -0700 Subject: 'synchronized' interface methods Message-ID: <63E3368F-18C8-4364-89BB-0B01FB507994@oracle.com> One of the changes in the 0.6.1 draft was to make 'synchronized' interface methods illegal. (This applies to default methods and static/private methods in interfaces.) Here are the main motivations for the restriction: 1) 'synchronized' is primarily for controlling access to fields. Interfaces have no (instance) fields. 2) There's some risk that VMs won't be naturally equipped to handle these methods, and some extra work will be necessary. Since we're not actively intending to support this combination of features, we'd really prefer that it not add to the VM implementation burden. 3) Interfaces allow multiple inheritance -- mixing behavior from different sources. Since different code may use locking for different purposes, it's dangerous to allow two separate code bodies to be merged via inheritance and end up sharing a single lock object. #3 is the most compelling to me. 
It makes sense to say that classes, among the many special privileges and responsibilities they have due to single inheritance, are tasked with managing locking on 'this'. (Of course, this is only a soft guarantee, since the locking methods of all objects are public.) --Dan From forax at univ-mlv.fr Tue Jan 8 13:47:39 2013 From: forax at univ-mlv.fr (Remi Forax) Date: Tue, 08 Jan 2013 22:47:39 +0100 Subject: 'synchronized' interface methods In-Reply-To: <63E3368F-18C8-4364-89BB-0B01FB507994@oracle.com> References: <63E3368F-18C8-4364-89BB-0B01FB507994@oracle.com> Message-ID: <50EC93FB.3010303@univ-mlv.fr> On 01/08/2013 09:47 PM, Dan Smith wrote: > One of the changes in the 0.6.1 draft was to make 'synchronized' interface methods illegal. (This applies to default methods and static/private methods in interfaces.) Here are the main motivations for the restriction: > > 1) 'synchronized' is primarily for controlling access to fields. Interfaces have no (instance) fields. > > 2) There's some risk that VMs won't be naturally equipped to handle these methods, and some extra work will be necessary. Since we're not actively intending to support this combination of features, we'd really prefer that it not add to the VM implementation burden. > > 3) Interfaces allow multiple inheritance -- mixing behavior from different sources. Since different code may use locking for different purposes, it's dangerous to allow two separate code bodies to be merged via inheritance and end up sharing a single lock object. > > 3) is the most compelling to me. It makes sense to say that classes, among the many special privileges and responsibilities they have due to single inheritance, are tasked with managing locking on 'this'. (Of course, this is only a soft guarantee, since the locking methods of all objects are public.) 
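[Editorial note: since 'synchronized' bodies are disallowed in interfaces, a default method that needs mutual exclusion can push lock ownership down to the implementing class, the single-inheritance "owner" of locking that Dan's point about classes describes. A runnable sketch with hypothetical names, not taken from the spec draft:]

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    interface Counter {
        ReentrantLock lock();   // each implementing class supplies its own lock
        int get();
        void increment();

        // The default method borrows the class's lock instead of declaring
        // itself 'synchronized' (which the 0.6.1 draft forbids).
        default void incrementTwice() {
            ReentrantLock l = lock();
            l.lock();
            try {
                increment();
                increment();
            } finally {
                l.unlock();
            }
        }
    }

    static class SimpleCounter implements Counter {
        private final ReentrantLock lock = new ReentrantLock();
        private int value;
        public ReentrantLock lock() { return lock; }
        public int get() { return value; }
        public void increment() { value++; }
    }

    public static void main(String[] args) {
        SimpleCounter c = new SimpleCounter();
        c.incrementTwice();
        System.out.println(c.get()); // prints 2
    }
}
```

This keeps the lock object private to each class, so two unrelated interfaces mixed into one class never accidentally share a monitor.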
> > --Dan 0) 'synchronized' as a keyword is stupid because it's an implementation detail that leaks outside, synchronizing on 'this' is stupid because 'this' is too public, and synchronizing on a Class (for static methods) is worse. Rémi From forax at univ-mlv.fr Wed Jan 9 09:41:07 2013 From: forax at univ-mlv.fr (Remi Forax) Date: Wed, 09 Jan 2013 18:41:07 +0100 Subject: Loss of conciseness due to overload ambiguity In-Reply-To: <50ED9E9E.5070509@oracle.com> References: <50ED9E9E.5070509@oracle.com> Message-ID: <50EDABB3.9080706@univ-mlv.fr> From lambda-dev, This is typically a case where you can see that transforming a method reference to an implicit lambda when doing inference is a bad idea. Instead of trying to tweak the algorithm if there is one applicable method, why not recognize that this is being done the wrong way. It should go this way: gather all applicable method references, i.e. all methods with a matching name whatever the number of parameters; then, for each of them, use the signature of the method (e.g. Person -> String) to find the lambda descriptor (this is equivalent to transforming it to an *explicitly typed* lambda and having inferred that the result type is the return type of the method); then, if more than one method is applicable, try to select the best one. Rémi On 01/09/2013 05:45 PM, Maurizio Cimadamore wrote: > We are considering ways to mitigate this; if the method reference is > unambiguous (only one match) there are facts that can be used to achieve > more overload resolution precision (pretty much what we do with implicit > lambdas - where we use arity to prune unwanted candidates). > > Maurizio > > On 09/01/13 17:07, Venkat Subramaniam wrote: >> Greetings, >> >> Is there a way to gain the desired conciseness in the following case? 
>> >> List<Person> people = Arrays.asList( >> new Person("Kate", 10), >> new Person("Jack", 10) >> ); >> >> Function<Person, String> byName = person -> person.getName(); >> >> people.stream().sorted(comparing(byName)).into(new ArrayList<>()); >> //[Jack - 10, Kate - 10] >> >> people.stream().sorted(comparing(Person::getName)).into(new ArrayList<>()); >> >> /* >> error: reference to comparing is ambiguous >> people.stream().sorted(comparing(Person::getName)).into(new ArrayList<>()); >> ^ >> both method comparing(IntFunction) in Comparators and method comparing(Function) in Comparators match >> where T#1,T#2,U are type-variables: >> T#1 extends Object declared in method comparing(IntFunction) >> T#2 extends Object declared in method comparing(Function) >> U extends Comparable declared in method comparing(Function) >> 1 error >> >> */ >> >> Thanks, >> >> Venkat >> > From daniel.smith at oracle.com Mon Jan 14 14:42:39 2013 From: daniel.smith at oracle.com (Dan Smith) Date: Mon, 14 Jan 2013 15:42:39 -0700 Subject: Overload resolution strategy Message-ID: <7A9D9C7E-336B-44F4-8F9B-49AE3649E1F7@oracle.com> Back in May, I wrote up some summaries of overloading design questions that we had been struggling with. Many involved concerns over how subtle differences in "implicit" lambda expressions (implicit meaning that the types of its parameters are inferred) might cause differences in overload resolution behavior, and whether this was a good idea. In the August EG meeting, the consensus was that we should try to be as "dumb" as is reasonable, mostly relying on things like the shape of a lambda for hints about which overload candidates to discard. And I was left to explore what that might look like and whether it would be viable. I've summarized the work we've done on this problem below. This will help to understand how we arrived at Part F of the 0.6.x spec; I'm also quite interested in feedback on our conclusions. 
(I know there's a lot to process here, but I'd really appreciate it if you could take the time to digest it. Overload resolution is, along with other aspects of inference, the hardest language design problem we've had to face, and the one with the fewest obvious answers.) --- Separately, we'd been looking at the problem of inference for combinator-like methods: methods for which the arguments provide no clues about what an implicit lambda's parameter types should be. We expect these to be very common. Example: Predicate<String> p = Predicate.negate(s -> s.isEmpty()); Here, the 'negate' method is generic; the only way to know what the type argument of 'negate' should be is by looking at the assignment's target type ('Predicate<String>'); and at this point, we have to have already resolved overloading. We type check lambda expressions in a "top-down" manner; this works even through multiple levels of lambda nesting. But, here, that top-down approach is defeated because method invocations don't pass information top down (from their return types to their parameter types). The solution is to change that. --- Basic framework: Both of these problems suggest that there may be times when we want to leave an implicit lambda expression untyped until after overload resolution is done. So applicability testing will depend on only some of the arguments. I call such cases "provisionally applicable" methods. Step 1: Potential Applicability. When a method has an implicit lambda expression argument that can't be typed, we can still look at the "shape" of the lambda -- its arity, whether it returns a value. This is accomplished via enhancements to the "potentially applicable" test. Before, we just looked at arity of the methods. Now, we also look for functional interface parameter types of a compatible shape. 
(Note that there's no type-related analysis that goes on at this stage, other than to identify functional interface types; we don't even recur on lambdas nested inside other lambdas, because that might require some inference work to decide what the nested lambda's target type will be.) Step 2: Applicability. Out of a pool of potentially-applicable methods, we need to determine which are applicable. The old process (Java 7) was, essentially, to perform subtyping tests for each argument. The new process needs to account for poly expressions by testing that the argument expression is compatible with its target type. Some analysis is used to determine which arguments should be used for applicability testing, and which should be left untyped. If one or more arguments is left untyped, the method is only _provisionally_ applicable. There's a spectrum of choices for how this works -- see the next section for details. Step 3: Most Specific. After we identify applicable methods, we try to identify the "most specific" one. This is based on the assumption that all candidates will work (i.e., not cause errors), so we're free to just choose whichever one seems best. But note that when we have a provisionally-applicable method, we can't be sure whether it will really work or not -- not until we've typed the untyped lambda body. So it seems that the only reasonable thing to do when a provisionally-applicable method is involved is to skip the most-specific check. (This implies that, when there are multiple applicable candidates, an ambiguity error usually occurs.) Where there is a lambda expression argument, we've identified a few different conditions under which the most-specific check should prefer one functional interface type to another -- assuming the functional interfaces have the same descriptor parameter types. 
Where S is the return type of one and T is the return type of the other, S is preferred to T when: - S <: T - T is void - S is primitive, T is reference, and all the lambda results are primitive - S is reference, T is primitive, and all the lambda results are reference - Both are functional interfaces, and S is preferred when we apply the rules recursively We had previously handled boxing/unboxing with the strict/loose phases of applicability, but this doesn't play nicely with provisional applicability: we don't want adding explicit lambda parameters to move a method from strict-applicable to loose-applicable. And enhancing the most specific analysis seems like a better way to fit with users' intuition about how this should work, anyway. --- Applicability testing: Given a set of potentially-applicable methods, we need to decide which are applicable. We have a spectrum of choices, falling between two extremes: Aggressive extreme: All implicit lambdas are speculatively typed during overload resolution, for each target type (different target types may lead to different lambda parameter types); this may cause inference to "force" some variables to be resolved during applicability testing; if any errors occur in the lambda bodies, the candidate method is not applicable. Example: List m(FileFilter f, Block b); List m(Predicate p, Block b); List l = m(x -> x.getParent() == null, t -> System.out.println(t)); // 1st lambda only compatible with FileFilter; 2nd lambda forced to be a Block, // causing a downstream assignment error Conservative extreme: All implicit lambdas remain untyped until after overload resolution, leaving a set of provisionally-applicable methods. If the set has more than one element, there is usually an ambiguity error. 
Examples: List l = m(x -> x.getParent() == null, t -> System.out.println(t)); // ambiguity error interface Stream<E> { <T> T map(Mapper<E, T> mapper); // descriptor: E->T byte map(ByteMapper<E> mapper); // descriptor: E->byte int map(IntMapper<E> mapper); // descriptor: E->int long map(LongMapper<E> mapper); // descriptor: E->long } stream.map(x -> 23); // ambiguity error Both of these extremes are unacceptable. The aggressive extreme fails to handle cases like 'negate'. The conservative extreme fails to handle some common overloading patterns like 'map'. We settled on two points a few steps inside of these extremes. Plan A: When the parameter types of an implicit lambda can't yet be inferred*, the lambda is left untyped and the method is provisionally applicable; otherwise, we speculatively type the lambda, allowing any errors (except exception checks -- see next section) in lambda bodies to impact applicability. (*Still experimenting with how to define "can't yet be inferred", but it should include cases like 'negate'.) Earlier, we had been considering type checking only part of a block lambda body -- the expressions appearing after 'return ...' -- but decided that was too hard to specify and explain, and too brittle. So this considers the whole body, including, e.g., any access of the lambda parameters' fields or methods, in order to find errors that would make the method inapplicable. Plan B: An implicit lambda is typed if all potentially-applicable methods share the same parameter types; otherwise, it remains untyped and the method is provisionally applicable. (For example, for the above 'map' method, the parameter type is always 'E'.) The difference between the two plans, essentially, is the conditions under which a lambda is left untyped. Plan B has more untyped lambdas (a superset): it adds an extra check between potential applicability testing and full applicability testing that will mark lambdas as untyped if they have inconsistent target parameter type lists. 
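[Editorial note: in practice, when an overload set like 'map' leaves a call provisionally applicable under both plans, the caller can always break the tie with a cast to the intended functional interface type (or explicit lambda parameter types). A runnable sketch with hypothetical names, loosely modeled on the 'map' example:]

```java
import java.util.function.Function;

public class Disambiguate {
    // Two overloads whose functional parameters differ in descriptor:
    // with an implicit lambda, both remain candidates and the call is
    // rejected as ambiguous.
    interface ByteMapper<T> { byte map(T t); }

    static <T, R> Object map(T t, Function<T, R> mapper) { return mapper.apply(t); }
    static <T> Object map(T t, ByteMapper<T> mapper) { return mapper.map(t); }

    public static void main(String[] args) {
        // map("x", s -> s.length());  // ambiguity error: lambda left untyped
        // A cast makes exactly one overload applicable:
        Object boxedInt = map("x", (Function<String, Integer>) s -> s.length());
        Object boxedByte = map("x", (ByteMapper<String>) s -> (byte) s.length());
        System.out.println(boxedInt + " " + boxedByte); // prints 1 1
    }
}
```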
We explored both of these at length, and chose Plan A, but it's important to understand the trade-offs. Problems with Plan A: Plan A is essentially what we've had in mind for most of the evolution of the project, so why reconsider it? (Some of this repeats discussion from the August EG meeting...) - Heavier demands on type-checker implementations. When encountering mixed target parameter types, Plan B conservatively says "I'm not sure how to type this lambda yet so I'll skip it," while Plan A says "let's find all possible typings for this lambda." The javac engine supports speculative typing, and we're happy with it, but it would be nice not to bake that requirement into the language. - Theoretical complexity. When overloaded methods & lambdas are deeply nested (unlikely in typical programs), type checking has an exponential-time cost (exponent being the nesting depth). C# has this property, and it actually caused them practical problems down the road when LINQ was implemented via rewrites to nested lambdas: http://blogs.msdn.com/b/ericlippert/archive/2007/03/26/lambda-expressions-vs-anonymous-methods-part-four.aspx - Hard to interpret programs. When there are mixed target parameter types, users need to think in terms of expressions in lambda bodies having multiple types ("if 'x' is a String, then this means ...; if 'x' is a Widget, then this means ..."). If lambdas are nested, there's a cross-product effect that makes this even worse. - Brittle when refactoring. Changing a lambda body in seemingly innocent ways might have unexpected overloading side-effects: maybe the overloading outcome depends on a particular invocation ('param.getLeftToe()'); changing or removing that expression might change which types for 'param' are considered acceptable. - Unhelpful failure mode. If something goes wrong as described by one of the above concerns, the user won't get immediate feedback. 
Instead, overload resolution simply chooses an interpretation for the lambda, and any problems will manifest themselves downstream, during type checking or even at runtime. It's worth emphasizing that all of these concerns deal with corner cases: to get something bad to happen, you need a pair of overloaded methods that give the lambda parameters different (but similar) types, and that have different return types or behavior. So "in the wild" experience with these issues is not likely to be common. The concern is mostly about having a clean, easy-to-understand model, and about avoiding rare "I have no idea what the compiler is doing" moments. Problems with Plan B: - Does not play well with generic methods. If type argument inference is a prerequisite to determining the parameter types of the lambda, we probably have to assume that the unresolved inference variables represent different types, even though they may ultimately be inferred to be the same thing. Example: static <E1, T> T map(Stream<E1> stream, Mapper<E1, T> mapper); // descriptor: E1 -> T static <E2> byte map(Stream<E2> stream, ByteMapper<E2> mapper); // descriptor: E2 -> byte ... (Here I've "flattened" the above 'map' instance methods into static methods.) While it may be "obvious" that the Mapper and ByteMapper always have the same parameter types, it seems really hard to define that "obviously the same" intuition accurately. So sets of generic (usually static) methods like this would probably need to be defined with different method names. The availability of default methods makes this a much more manageable concern, but it's still a concern. - It's an unproven idea that might not provide enough power. We spent a good deal of effort trying to identify a common overloading pattern that requires multiple methods that accept functions with different parameter types. 
Surprisingly, the "same parameter types" intuition seems to pretty effectively cover the way people actually overload function-accepting methods (based on an exploration of LINQ, Scala collections, and a few other APIs). Unfortunately, demonstrating the viability of Plan B is, loosely, a semi-decidable problem: it may be possible to produce a counter-example, but until we find one, we're stuck with "I don't know" as an answer. So there's a risk that we release Java 8 with Plan B and then find a lot of people complaining that overload resolution is giving them ambiguity errors when there's "clearly" no ambiguity. That could be addressed by adding power in Java 9, but our hands would be tied in the mean time. - Adds some complexity to the "applicability" concept. In order to understand whether a lambda argument will influence applicability or not, you've got to collect all the potentially-applicable methods and decide whether they have the same parameter types. That's another layer inserted into overload resolution. Summary: Plan B offers a sensible way to avoid a lot of tricky problems with Plan A, but does so by adopting the risk that it won't be powerful enough. Plan B can be viewed as adding a guardrail to keep people out of nasty corner cases; the risk is that the guardrail will encroach on the highway. In our estimation, the risk is too great, and so we prefer to accept the theoretical problems and quirks of Plan A. That said, I'm curious to hear opinions on how we've weighed the various concerns. --- Exception checking: There's a general intuition that exceptions being thrown in lambda bodies should not impact overload resolution, since it's such a subtle property of the code. (This discussion applies to lambda with either implicit or explicit parameter types.) 
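[Editorial note: the dependence of exception checking on inference can be made concrete. The sketch below uses hypothetical names and is a simplified, compilable variant of the 'throws T' pattern: the exception a lambda body throws is a type variable, fixed only once inference has chosen it, which is necessarily after overload resolution.]

```java
import java.io.IOException;

public class ThrowsInference {
    // The thrown type is a type variable; which checked exception this
    // method "throws" depends on how T is instantiated at the call site.
    static <T extends Throwable> String foo() throws T { return "ok"; }

    interface StringFunction<R> {
        R f(String s) throws IOException;   // descriptor permits IOException
    }

    static <R> R m(StringFunction<R> f, String s) throws IOException {
        return f.f(s);
    }

    static String run() {
        try {
            // T = IOException is allowed by the descriptor. Writing
            // ThrowsInference.<java.sql.SQLException>foo() here instead
            // would be a compile-time error, but one that only appears
            // after 'm' has already been selected.
            return m(s -> ThrowsInference.<IOException>foo(), "x");
        } catch (IOException e) {
            return "unexpected";
        }
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints ok
    }
}
```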
More fundamentally, exception checking may depend on the results of inference, which we don't get until after overload resolution is done: <T extends Throwable> T foo() throws T; interface StringFunction<T> { T f(String s) throws IOException; } <T> List<T> m(StringFunction<T> f); List<IOException> l1 = m((String s) -> foo()); // 'foo' throws IOException, which is fine List<SQLException> l2 = m((String s) -> foo()); // 'foo' throws SQLException, which is an error Another exception check ensures that a 'catch' clause does not try to catch an exception that is never thrown. This, too, cannot be checked before inference is done: List<IOException> l3 = m((String s) -> { try { return foo(); } catch (IOException e) { } // okay, 'foo' throws IOException } ); List<SQLException> l4 = m((String s) -> { try { return foo(); } catch (IOException e) { } // error, 'foo' doesn't throw IOException } ); Conclusion: exception-checking errors must be special-cased, excluded from the definition of lambda compatibility and method applicability. From daniel.smith at oracle.com Tue Jan 15 12:07:27 2013 From: daniel.smith at oracle.com (Dan Smith) Date: Tue, 15 Jan 2013 13:07:27 -0700 Subject: Overload resolution strategy In-Reply-To: <7A9D9C7E-336B-44F4-8F9B-49AE3649E1F7@oracle.com> References: <7A9D9C7E-336B-44F4-8F9B-49AE3649E1F7@oracle.com> Message-ID: <0162EF4B-05BD-4219-A713-7ED0702DA75F@oracle.com> I had an off-list question about our motivation for "provisionally applicable" methods -- why not just do all the inference first and then do overload resolution? Here was my reply. --- The problem is that we're committed to keeping overload resolution context-independent. The invocation's target type must not be considered before choosing a method. Example: String m(Object arg); Object m(String arg); String s = m("hi"); // resolves to m(String); the assignment has a type error Context independence for overload resolution is an important property, because it simplifies the story that users need to grasp, and it helps to manage theoretical problems that arise from nesting. 
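[Editorial note: Dan's m(Object)/m(String) example above becomes runnable if the result is assigned to an Object target instead of a String one; resolution still picks the same overload, purely from the argument, which is the context-independence being described:]

```java
public class ContextFree {
    static String m(Object arg) { return "m(Object)"; }
    static Object m(String arg) { return "m(String)"; }

    public static void main(String[] args) {
        // Resolution looks only at the argument: "hi" is a String, so the
        // more specific m(String) is chosen, even though its Object return
        // type would be inconvenient for a String-typed target.
        Object r = m("hi");
        System.out.println(r); // prints m(String)

        // Changing the argument's static type, not the target, is what
        // selects the other overload:
        System.out.println(m((Object) "hi")); // prints m(Object)
    }
}
```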
The way this interacts with type inference is in two phases (the JLS and javac have been fuzzy on this): 1) a method is applicable if it is possible to infer valid type arguments based just on the invocation arguments 2) the actual type arguments for a method are inferred from the invocation arguments _and_ invocation target type So inference happens twice; (1) may get a different answer than (2), and that's okay, because (1) is just a boolean "there exists a solution" test, while (2) produces the "real" answer. Adding lambdas to the mix, there are cases in which a lambda cannot be typed until after we look at the invocation target. So we're left with dependencies that look like: lambda argument type <- checking invocation target checking invocation target <- overload resolution overload resolution <- argument types In order to break this circle, we remove the lambda from the set of arguments that need to be typed, and call the method provisionally applicable. --Dan From brian.goetz at oracle.com Fri Jan 18 18:54:09 2013 From: brian.goetz at oracle.com (Brian Goetz) Date: Fri, 18 Jan 2013 18:54:09 -0800 Subject: Names of desugared lambdas Message-ID: We had an action item from the last meeting, which got lost in the shuffle, to revisit the naming of methods that are desugared from lambda bodies. The current lambda$n scheme is fine for non-serializable lambdas but, at least for serializable lambdas, is too sensitive to harmless refactorings like changing the order of methods within a class file. While we know there is no perfect solution, we can choose a solution that is less obviously brittle. This brittleness is especially an issue with libraries (like the combinators for Comparator, which, to be consistent with existing Comparator implementations in the JDK, return serializable lambdas.) What I propose is this: lambda$mmm$kkk$nnn where mmm is the method name, kkk is the hash code of the method signature, and nnn is a sequentially assigned number.
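As a rough sketch of the proposed scheme (the helper below is hypothetical, for illustration only; it is not how javac actually computes the name):

```java
public class LambdaNames {
    // Hypothetical helper: combine the enclosing method's name, a hash of
    // its signature, and a per-method counter, per the lambda$mmm$kkk$nnn
    // proposal. Real desugaring details may differ.
    static String lambdaName(String method, String signature, int n) {
        return "lambda$" + method + "$"
                + Integer.toHexString(signature.hashCode()) + "$" + n;
    }

    public static void main(String[] args) {
        String sig = "(Ljava/util/function/Function;)Ljava/util/Comparator;";
        String first = lambdaName("comparing", sig, 0);
        String second = lambdaName("comparing", sig, 1);
        // Lambdas in the same method differ only in the counter, so
        // reordering methods elsewhere in the class leaves them stable.
        assert !first.equals(second);
        assert first.startsWith("lambda$comparing$");
    }
}
```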
That way, at least lambdas within a method will not have any effect on lambdas from another method. This isn't perfect, but it's better than what we have. From brian.goetz at oracle.com Tue Jan 29 08:06:31 2013 From: brian.goetz at oracle.com (Brian Goetz) Date: Tue, 29 Jan 2013 11:06:31 -0500 Subject: Some pullbacks Message-ID: <5107F387.1050507@oracle.com> We would like to pull back two small features from the JSR-335 feature plan: - private methods in interfaces - "package modifier" for package-private visibility The primary reason is resourcing; cutting some small and inessential features made room for deeper work on more important things like type inference (on which we've made some big improvements lately!) Private methods are also an incomplete feature; we'd like the full set of visibilities, and limiting to public/private was already a compromise based on what we thought we could get done in the timeframe we had. But it would still be a rough edge that protected/package were missing. The second feature, while trivial (though nothing is really trivial), loses a lot of justification without at least a move towards the full set of accessibilities. As it stands, it is pretty far afield of lambda, nothing else depends on it, and not doing it now does not preclude doing it later. (The only remaining connection to lambda is accelerating the death of the phrase "default visibility" to avoid confusion with default methods.)
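With private interface methods pulled, sharing code between default methods needs another home. One illustrative workaround (all names below are mine, not from the spec) is a package-private companion class holding the shared helper:

```java
interface Shape {
    double width();
    double height();

    // Both default methods share validation logic via the companion class,
    // which is what a private interface method would otherwise provide.
    default double area() {
        return ShapeSupport.checked(width()) * ShapeSupport.checked(height());
    }
    default double perimeter() {
        return 2 * (ShapeSupport.checked(width()) + ShapeSupport.checked(height()));
    }
}

// Package-private companion class: invisible outside the package, so the
// helper does not leak into the interface's public API.
final class ShapeSupport {
    private ShapeSupport() { }
    static double checked(double d) {
        if (d < 0) throw new IllegalArgumentException("negative dimension");
        return d;
    }
}

public class ShapeDemo {
    public static void main(String[] args) {
        Shape s = new Shape() {
            public double width() { return 3; }
            public double height() { return 4; }
        };
        assert s.area() == 12.0;
        assert s.perimeter() == 14.0;
    }
}
```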
From daniel.smith at oracle.com Tue Jan 29 08:19:17 2013 From: daniel.smith at oracle.com (Dan Smith) Date: Tue, 29 Jan 2013 09:19:17 -0700 Subject: Some pullbacks In-Reply-To: <5107F387.1050507@oracle.com> References: <5107F387.1050507@oracle.com> Message-ID: <87112EC4-55E8-4D3A-9FA4-6F1FA55C6FF8@oracle.com> On Jan 29, 2013, at 9:06 AM, Brian Goetz wrote: > We would like to pull back two small features from the JSR-335 feature plan: > > - private methods in interfaces > - "package modifier" for package-private visibility > > The primary reason is resourcing; cutting some small and inessential features made room for deeper work on more important things like type inference (on which we've made some big improvements lately!) Private methods are also an incomplete feature; we'd like the full set of visibilities, and limiting to public/private was already a compromise based on what we thought we could get done in the timeframe we had. But it would still be a rough edge that protected/package were missing. To clarify (because I find a lot of people get mixed up about this): while there will be no language support for private methods in interfaces, there _will_ be VM support for private methods in interfaces. This is useful for some compiler tricks that lift things into the top level; thanks to default methods, the top level may now be an interface. There will be no VM support for package-/protected-access methods in interfaces. (This is all consistent with the 0.6.1 spec.) > The second feature, while trivial (though nothing is really trivial), loses a lot of justification without at least a move towards the full set of accessibilities. As it stands, it is pretty far afield of lambda, nothing else depends on it, and not doing it now does not preclude doing it later. (The only remaining connection to lambda is accelerating the death of the phrase "default visibility" to avoid confusion with default methods.) 
I'll be changing the 0.6.1 spec to remove the 'package' syntax but keep the "package access" terminology. --Dan From paul.sandoz at oracle.com Tue Jan 29 09:12:03 2013 From: paul.sandoz at oracle.com (Paul Sandoz) Date: Tue, 29 Jan 2013 18:12:03 +0100 Subject: Encounter order Message-ID: <678009ED-739F-4E4D-B5C4-33D148A93C98@oracle.com> Hi, Below is a description of encounter order for the Streams API. The implementation in the lambda repo currently conforms to this documentation, although it is not implemented exactly as described. Paul. -- A source has or does not have encounter order. List and arrays are sources that have encounter order (arrays can be said to also have a spatial order). HashSet is a source that does not have encounter order (another example is PriorityQueue). A terminal operation preserves or does not preserve encounter order when producing a result. Non-preserving terminal operations are forEach, forEachUntil, findAny, match{Any, None, All}, collect(toHashSet()) and collectUnordered. An intermediate operation may inject encounter order down-stream. The sorted() operation injects encounter order when the natural comparator is used to sort elements. An intermediate operation may clear encounter order down-stream. There are no such operations implemented. (Previously the unordered() operation cleared encounter order.) Otherwise an intermediate operation must preserve encounter order if required to do so (see next paragraphs). An intermediate operation may choose to apply a different algorithm depending on whether the encounter order of its output elements must be preserved. The distinct() operation will, when evaluating in parallel, use a ConcurrentHashMap to store unique elements if encounter order does not need to be preserved; otherwise, if encounter order needs to be preserved, a fold will be performed (the equivalent of, in parallel, mapping each element to a singleton set and then associatively reducing the sets to one set).
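The distinct() trade-off described above can be sketched against the Stream API as it eventually shipped in Java 8 (note the hedge: unlike the snapshot in this mail, the shipped API does have an unordered() operation that clears encounter order):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class OrderDemo {
    public static void main(String[] args) {
        List<Integer> src = Arrays.asList(3, 1, 3, 2, 1);

        // The source (a List) has encounter order and collect(toList())
        // preserves it, so distinct() must preserve first occurrences
        // in order, even in parallel: 3, 1, 2.
        List<Integer> ordered = src.parallelStream()
                                   .distinct()
                                   .collect(Collectors.toList());
        assert ordered.equals(Arrays.asList(3, 1, 2));

        // With encounter order cleared, distinct() is free to use a cheaper
        // concurrent algorithm; only the set of elements is guaranteed.
        List<Integer> unordered = src.parallelStream()
                                     .unordered()
                                     .distinct()
                                     .collect(Collectors.toList());
        assert unordered.containsAll(Arrays.asList(1, 2, 3));
        assert unordered.size() == 3;
    }
}
```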
An intermediate operation should preserve encounter order of the output elements if: a.1) the upstream elements input to the intermediate operation have an encounter order (either because the source has encounter order or because an upstream operation injected encounter order); and a.2) the terminal operation preserves encounter order. An intermediate operation does not need to preserve encounter order of the output elements if: b.1) the upstream elements input to the intermediate operation have no encounter order (either because the source has no encounter order or because an upstream operation cleared encounter order); or b.2) the terminal operation does not preserve encounter order *and* the intermediate operation is in a sequence of operations to be evaluated where the last operation in the sequence is the terminal operation and all operations in the sequence are evaluated in parallel. Rule b.2 above ensures that for the following pipeline encounter order is preserved on the sequential forEach: list.parallelStream().distinct().sequential().forEach() i.e. the distinct() operation will preserve the encounter order of the list From david.lloyd at redhat.com Wed Jan 30 07:13:45 2013 From: david.lloyd at redhat.com (David M. Lloyd) Date: Wed, 30 Jan 2013 09:13:45 -0600 Subject: Some pullbacks In-Reply-To: <5107F387.1050507@oracle.com> References: <5107F387.1050507@oracle.com> Message-ID: <510938A9.9080202@redhat.com> On 01/29/2013 10:06 AM, Brian Goetz wrote: > We would like to pull back two small features from the JSR-335 feature > plan: > > - private methods in interfaces > - "package modifier" for package-private visibility > > The primary reason is resourcing; cutting some small and inessential > features made room for deeper work on more important things like type > inference (on which we've made some big improvements lately!)
Private > methods are also an incomplete feature; we'd like the full set of > visibilities, and limiting to public/private was already a compromise > based on what we thought we could get done in the timeframe we had. But > it would still be a rough edge that protected/package were missing. > > The second feature, while trivial (though nothing is really trivial), > loses a lot of justification without at least a move towards the full > set of accessibilities. As it stands, it is pretty far afield of > lambda, nothing else depends on it, and not doing it now does not > preclude doing it later. (The only remaining connection to lambda is > accelerating the death of the phrase "default visibility" to avoid > confusion with default methods.) Sounds fine to me. TBH I found the notion a bit unsettling anyway, and what if (for example) the default access level in some future JLS were to change from "package" to "module"? It would be a shame to defeat such a change solely because we had previously made this decision. -- - DML From kevinb at google.com Wed Jan 30 08:43:15 2013 From: kevinb at google.com (Kevin Bourrillion) Date: Wed, 30 Jan 2013 08:43:15 -0800 Subject: Some pullbacks In-Reply-To: <5107F387.1050507@oracle.com> References: <5107F387.1050507@oracle.com> Message-ID: Agreed. On Tue, Jan 29, 2013 at 8:06 AM, Brian Goetz wrote: > We would like to pull back two small features from the JSR-335 feature > plan: > > - private methods in interfaces > - "package modifier" for package-private visibility > > The primary reason is resourcing; cutting some small and inessential > features made room for deeper work on more important things like type > inference (on which we've made some big improvements lately!) Private > methods are also an incomplete feature; we'd like the full set of > visibilities, and limiting to public/private was already a compromise based > on what we thought we could get done in the timeframe we had.
But it would > still be a rough edge that protected/package were missing. > > The second feature, while trivial (though nothing is really trivial), > loses a lot of justification without at least a move towards the full set > of accessibilities. As it stands, it is pretty far afield of lambda, > nothing else depends on it, and not doing it now does not preclude doing it > later. (The only remaining connection to lambda is accelerating the death > of the phrase "default visibility" to avoid confusion with default methods.) > > -- Kevin Bourrillion | Java Librarian | Google, Inc. | kevinb at google.com From forax at univ-mlv.fr Wed Jan 30 11:35:29 2013 From: forax at univ-mlv.fr (Remi Forax) Date: Wed, 30 Jan 2013 20:35:29 +0100 Subject: Some pullbacks In-Reply-To: <5107F387.1050507@oracle.com> References: <5107F387.1050507@oracle.com> Message-ID: <51097601.8060607@univ-mlv.fr> On 01/29/2013 05:06 PM, Brian Goetz wrote: > We would like to pull back two small features from the JSR-335 feature > plan: > > - private methods in interfaces > - "package modifier" for package-private visibility > > The primary reason is resourcing; cutting some small and inessential > features made room for deeper work on more important things like type > inference (on which we've made some big improvements lately!) Private > methods are also an incomplete feature; we'd like the full set of > visibilities, and limiting to public/private was already a compromise > based on what we thought we could get done in the timeframe we had. > But it would still be a rough edge that protected/package were missing. > > The second feature, while trivial (though nothing is really trivial), > loses a lot of justification without at least a move towards the full > set of accessibilities.
As it stands, it is pretty far afield of > lambda, nothing else depends on it, and not doing it now does not > preclude doing it later. (The only remaining connection to lambda is > accelerating the death of the phrase "default visibility" to avoid > confusion with default methods.) > The package modifier is only needed if we introduce methods with package visibility in interfaces; we don't allow that, so it doesn't pull its own weight. For private methods in interfaces, I have two fears. The first one: we don't provide a good answer if users want to share code between different default methods. The second one is that having a feature which is present in the VM but not accessible in Java often leads to bugs, because the feature is not tested enough (it's harder to write tests if you have no way to express it in Java). Brian seems well aware of these trade-offs, so I will trust Brian on this. Rémi From brian.goetz at oracle.com Wed Jan 30 11:51:17 2013 From: brian.goetz at oracle.com (Brian Goetz) Date: Wed, 30 Jan 2013 14:51:17 -0500 Subject: Some pullbacks In-Reply-To: <51097601.8060607@univ-mlv.fr> References: <5107F387.1050507@oracle.com> <51097601.8060607@univ-mlv.fr> Message-ID: <510979B5.6070604@oracle.com> > For private methods in interfaces, I have two fears, the first one, we > don't provide a good answer if users want to share code between > different default methods. Yes, this was the motivation for the feature in the first place. We're all in agreement (I think) that this is desirable. As a library writer, I want the full set of accessibilities for interface methods, and private methods would have been a good start. > The second one is that having a feature which > is present in the VM but not accessible in Java often leads to bugs > because the feature is not enough tested (it's harder to write tests if > you have no way to express it in Java). > Brian seems well aware of these trade-offs so I will trust Brian on this.
Yes, having a VM feature that has no corresponding language surfacing does complicate testing. However, this is a regular state of affairs for our VM team; the VM has to deal with all sorts of horrible classfiles which could never come out of javac, or could only come out of javac as a result of tortured separate compilation (such as dealing with the case when a private method overrides a public one and is then overridden by a public method which makes a super call (correct behavior: the super call inherits "around" the private method, blech)). Javac would never produce this classfile -- and that's not a bug -- but the VM still has to expect that situation. From daniel.smith at oracle.com Wed Jan 30 15:03:09 2013 From: daniel.smith at oracle.com (Dan Smith) Date: Wed, 30 Jan 2013 16:03:09 -0700 Subject: Overload resolution strategy In-Reply-To: <7A9D9C7E-336B-44F4-8F9B-49AE3649E1F7@oracle.com> References: <7A9D9C7E-336B-44F4-8F9B-49AE3649E1F7@oracle.com> Message-ID: I'd still love to hear feedback on this. (I know it's a lot to process.) --Dan On Jan 14, 2013, at 3:42 PM, Dan Smith wrote: > Back in May, I wrote up some summaries of overloading design questions that we had been struggling with. Many involved concerns over how subtle differences in "implicit" lambda expressions (implicit meaning that the types of their parameters are inferred) might cause differences in overload resolution behavior, and whether this was a good idea. > > In the August EG meeting, the consensus was that we should try to be as "dumb" as is reasonable, mostly relying on things like the shape of a lambda for hints about which overload candidates to discard. And I was left to explore what that might look like and whether it would be viable. > > I've summarized the work we've done on this problem below. This will help to understand how we arrived at Part F of the 0.6.x spec; I'm also quite interested in feedback on our conclusions.
(I know there's a lot to process here, but I'd really appreciate it if you could take the time to digest it. Overload resolution is, along with other aspects of inference, the hardest language design problem we've had to face, and the one with the fewest obvious answers.) > > --- > > Separately, we'd been looking at the problem of inference for combinator-like methods: methods for which the arguments provide no clues about what an implicit lambda's parameter types should be. We expect these to be very common. Example: > > Predicate<String> p = Predicate.negate(s -> s.isEmpty()); > > Here, the 'negate' method is generic; the only way to know what the type argument of 'negate' should be is by looking at the assignment's target type ('Predicate<String>'); and at this point, we have to have already resolved overloading. > > We type check lambda expressions in a "top-down" manner; this works even through multiple levels of lambda nesting. But, here, that top-down approach is defeated because method invocations don't pass information top down (from their return types to their parameter types). The solution is to change that. > > --- > > Basic framework: > > Both of these problems suggest that there may be times when we want to leave an implicit lambda expression untyped until after overload resolution is done. So applicability testing will depend on only some of the arguments. I call such cases "provisionally applicable" methods. > > Step 1: Potential Applicability. When a method has an implicit lambda expression argument that can't be typed, we can still look at the "shape" of the lambda -- its arity, whether it returns a value. This is accomplished via enhancements to the "potentially applicable" test. Before, we just looked at the arity of the methods. Now, we also look for functional interface parameter types of a compatible shape.
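The 'negate' case can be fleshed out as a runnable sketch (this static negate is a stand-in of my own for illustration; the shape of the real library combinator was still in flux at the time):

```java
import java.util.function.Predicate;

public class NegateDemo {
    // Stand-in combinator: nothing about the argument constrains T;
    // only the invocation's target type does.
    static <T> Predicate<T> negate(Predicate<T> p) {
        return t -> !p.test(t);
    }

    public static void main(String[] args) {
        // T is inferred as String purely from the assignment's target type,
        // which is why target-type information must flow top-down through
        // the method invocation to give the lambda its parameter type.
        Predicate<String> nonEmpty = negate(s -> s.isEmpty());
        assert nonEmpty.test("hi");
        assert !nonEmpty.test("");
    }
}
```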
(Note that there's no type-related analysis that goes on at this stage, other than to identify functional interface types; we don't even recur on lambdas nested inside other lambdas, because that might require some inference work to decide what the nested lambda's target type will be.) > > Step 2: Applicability. Out of a pool of potentially-applicable methods, we need to determine which are applicable. The old process (Java 7) was, essentially, to perform subtyping tests for each argument. The new process needs to account for poly expressions by testing that the argument expression is compatible with its target type. Some analysis is used to determine which arguments should be used for applicability testing, and which should be left untyped. If one or more arguments are left untyped, the method is only _provisionally_ applicable. There's a spectrum of choices for how this works -- see the next section for details. > > Step 3: Most Specific. After we identify applicable methods, we try to identify the "most specific" one. This is based on the assumption that all candidates will work (i.e., not cause errors), so we're free to just choose whichever one seems best. But note that when we have a provisionally-applicable method, we can't be sure whether it will really work or not -- not until we've typed the untyped lambda body. So it seems that the only reasonable thing to do when a provisionally-applicable method is involved is to skip the most-specific check. (This implies that, when there are multiple applicable candidates, an ambiguity error usually occurs.) > > Where there is a lambda expression argument, we've identified a few different conditions under which the most-specific check should prefer one functional interface type to another -- assuming the functional interfaces have the same descriptor parameter types.
Where S is the return type of one and T is the return type of the other, S is preferred to T when: > - S <: T > - T is void > - S is primitive, T is reference, and all the lambda results are primitive > - S is reference, T is primitive, and all the lambda results are reference > - Both are functional interfaces, and S is preferred when we apply the rules recursively > > We had previously handled boxing/unboxing with the strict/loose phases of applicability, but this doesn't play nicely with provisional applicability: we don't want adding explicit lambda parameters to move a method from strict-applicable to loose-applicable. And enhancing the most specific analysis seems like a better way to fit with users' intuition about how this should work, anyway. > > --- > > Applicability testing: > > Given a set of potentially-applicable methods, we need to decide which are applicable. We have a spectrum of choices, falling between two extremes: > > Aggressive extreme: All implicit lambdas are speculatively typed during overload resolution, for each target type (different target types may lead to different lambda parameter types); this may cause inference to "force" some variables to be resolved during applicability testing; if any errors occur in the lambda bodies, the candidate method is not applicable. > > Example: > > List<File> m(FileFilter f, Block<File> b); > List<String> m(Predicate<String> p, Block<String> b); > List<String> l = m(x -> x.getParent() == null, t -> System.out.println(t)); > // 1st lambda only compatible with FileFilter; 2nd lambda forced to be a Block<File>, > // causing a downstream assignment error > > Conservative extreme: All implicit lambdas remain untyped until after overload resolution, leaving a set of provisionally-applicable methods. If the set has more than one element, there is usually an ambiguity error.
> > Examples: > > List<String> l = m(x -> x.getParent() == null, t -> System.out.println(t)); > // ambiguity error > > interface Stream<E> { > <T> T map(Mapper<E, T> mapper); // descriptor: E->T > byte map(ByteMapper<E> mapper); // descriptor: E->byte > int map(IntMapper<E> mapper); // descriptor: E->int > long map(LongMapper<E> mapper); // descriptor: E->long > } > stream.map(x -> 23); > // ambiguity error > > Both of these extremes are unacceptable. The aggressive extreme fails to handle cases like 'negate'. The conservative extreme fails to handle some common overloading patterns like 'map'. > > We settled on two points a few steps inside of these extremes. > > Plan A: When the parameter types of an implicit lambda can't yet be inferred*, the lambda is left untyped and the method is provisionally applicable; otherwise, we speculatively type the lambda, allowing any errors (except exception checks -- see next section) in lambda bodies to impact applicability. (*Still experimenting with how to define "can't yet be inferred", but it should include cases like 'negate'.) > > Earlier, we had been considering type checking only part of a block lambda body -- the expressions appearing after 'return ...' -- but decided that was too hard to specify and explain, and too brittle. So this considers the whole body, including, e.g., any access of the lambda parameters' fields or methods, in order to find errors that would make the method inapplicable. > > Plan B: An implicit lambda is typed if all potentially-applicable methods share the same parameter types; otherwise, it remains untyped and the method is provisionally applicable. (For example, for the above 'map' method, the parameter type is always 'E'.) > > The difference between the two plans, essentially, is the conditions under which a lambda is left untyped.
Plan B has more untyped lambdas (a superset): it adds an extra check between potential applicability testing and full applicability testing that will mark lambdas as untyped if they have inconsistent target parameter type lists. > > We explored both of these at length, and chose Plan A, but it's important to understand the trade-offs. > > Problems with Plan A: > > Plan A is essentially what we've had in mind for most of the evolution of the project, so why reconsider it? (Some of this repeats discussion from the August EG meeting...) > > - Heavier demands on type-checker implementations. When encountering mixed target parameter types, Plan B conservatively says "I'm not sure how to type this lambda yet so I'll skip it," while Plan A says "let's find all possible typings for this lambda." The javac engine supports speculative typing, and we're happy with it, but it would be nice not to bake that requirement into the language. > > - Theoretical complexity. When overloaded methods & lambdas are deeply nested (unlikely in typical programs), type checking has an exponential-time cost (exponent being the nesting depth). C# has this property, and it actually caused them practical problems down the road when LINQ was implemented via rewrites to nested lambdas: > http://blogs.msdn.com/b/ericlippert/archive/2007/03/26/lambda-expressions-vs-anonymous-methods-part-four.aspx > > - Hard to interpret programs. When there are mixed target parameter types, users need to think in terms of expressions in lambda bodies having multiple types ("if 'x' is a String, then this means ...; if 'x' is a Widget, then this means ..."). If lambdas are nested, there's a cross-product effect that makes this even worse. > > - Brittle when refactoring. 
Changing a lambda body in seemingly innocent ways might have unexpected overloading side-effects: maybe the overloading outcome depends on a particular invocation ('param.getLeftToe()'); changing or removing that expression might change which types for 'param' are considered acceptable. > > - Unhelpful failure mode. If something goes wrong as described by one of the above concerns, the user won't get immediate feedback. Instead, overload resolution simply chooses an interpretation for the lambda, and any problems will manifest themselves downstream, during type checking or even at runtime. > > It's worth emphasizing that all of these concerns deal with corner cases: to get something bad to happen, you need a pair of overloaded methods that give the lambda parameters different (but similar) types, and that have different return types or behavior. So "in the wild" experience with these issues is not likely to be common. The concern is mostly about having a clean, easy-to-understand model, and about avoiding rare "I have no idea what the compiler is doing" moments. > > Problems with Plan B: > > - Does not play well with generic methods. If type argument inference is a prerequisite to determining the parameter types of the lambda, we probably have to assume that the unresolved inference variables represent different types, even though they may ultimately be inferred to be the same thing. Example: > > static <E1, T> T map(Stream<E1> stream, Mapper<E1, T> mapper); // descriptor: E1 -> T > static <E2> byte map(Stream<E2> stream, ByteMapper<E2> mapper); // descriptor: E2 -> byte > ... > > (Here I've "flattened" the above 'map' instance methods into static methods.) > > While it may be "obvious" that the Mapper and ByteMapper always have the same parameter types, it seems really hard to define that "obviously the same" intuition accurately. > > So sets of generic (usually static) methods like this would probably need to be defined with different method names.
The availability of default methods makes this a much more manageable concern, but it's still a concern. > > - It's an unproven idea that might not provide enough power. We spent a good deal of effort trying to identify a common overloading pattern that requires multiple methods that accept functions with different parameter types. Surprisingly, the "same parameter types" intuition seems to pretty effectively cover the way people actually overload function-accepting methods (based on an exploration of LINQ, Scala collections, and a few other APIs). Unfortunately, demonstrating the viability of Plan B is, loosely, a semi-decidable problem: it may be possible to produce a counter-example, but until we find one, we're stuck with "I don't know" as an answer. > > So there's a risk that we release Java 8 with Plan B and then find a lot of people complaining that overload resolution is giving them ambiguity errors when there's "clearly" no ambiguity. That could be addressed by adding power in Java 9, but our hands would be tied in the meantime. > > - Adds some complexity to the "applicability" concept. In order to understand whether a lambda argument will influence applicability or not, you've got to collect all the potentially-applicable methods and decide whether they have the same parameter types. That's another layer inserted into overload resolution. > > Summary: > > Plan B offers a sensible way to avoid a lot of tricky problems with Plan A, but does so by adopting the risk that it won't be powerful enough. Plan B can be viewed as adding a guardrail to keep people out of nasty corner cases; the risk is that the guardrail will encroach on the highway. In our estimation, the risk is too great, and so we prefer to accept the theoretical problems and quirks of Plan A. > > That said, I'm curious to hear opinions on how we've weighed the various concerns.
> > --- > > Exception checking: > > There's a general intuition that exceptions being thrown in lambda bodies should not impact overload resolution, since it's such a subtle property of the code. (This discussion applies to lambdas with either implicit or explicit parameter types.) > > More fundamentally, exception checking may depend on the results of inference, which we don't get until after overload resolution is done: > > <T extends Exception> T foo() throws T; > > interface StringFunction<T> { T f(String s) throws IOException; } > > <T extends Exception> List<T> m(StringFunction<T> f); > > List<IOException> l = m((String s) -> foo()); // 'foo' throws IOException, which is fine > List<SQLException> l = m((String s) -> foo()); // 'foo' throws SQLException, which is an error > > Another exception check ensures that a 'catch' clause does not try to catch an exception that is never thrown. This, too, cannot be checked before inference is done: > > List<IOException> l = m((String s) -> { > try { return foo(); } > catch (IOException e) { } // okay, 'foo' throws IOException > }); > > List<SQLException> l = m((String s) -> { > try { return foo(); } > catch (IOException e) { } // error, 'foo' doesn't throw IOException > }); > > Conclusion: exception-checking errors must be special-cased, excluded from the definition of lambda compatibility and method applicability.
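The inference-dependent exception check can be reproduced in a compilable variant of the example (the names follow the mail; the method bodies and the explicit type witness are mine, added so the sketch runs standalone):

```java
import java.io.IOException;

public class ExnDemo {
    // Shapes reconstructed from the example in the mail.
    interface StringFunction<R> { R f(String s) throws IOException; }

    // Simplified 'm': just hands the function back so we can invoke it.
    static <R> StringFunction<R> m(StringFunction<R> f) { return f; }

    static <T extends Exception> T foo() throws T { return null; }

    public static void main(String[] args) throws IOException {
        // T is inferred as IOException, which StringFunction.f may throw,
        // so this checks out.
        StringFunction<IOException> ok =
                m((String s) -> ExnDemo.<IOException>foo());
        assert ok.f("x") == null;

        // Forcing T = SQLException would be rejected, since f declares
        // only IOException:
        // m((String s) -> ExnDemo.<java.sql.SQLException>foo()); // compile error
    }
}
```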