JSR 335 Lambda Specification, 0.6.2
daniel.smith at oracle.com
Fri Mar 22 15:59:38 PDT 2013
On Mar 21, 2013, at 11:48 AM, Stephan Herrmann <stephan.herrmann at berlin.de> wrote:
> > This fills in many of the missing pieces in Part G, Type Inference.
> Let me start by saying that this version of the spec is a big step
> towards dispelling my previous concerns. After implementing most
> of Part G in the Eclipse compiler I can confirm that the inference
> works well as specified, but some concerns remain.
Glad to have another implementation, and to hear that the initial effort was not too bumpy.
> Let me illustrate the status by some test figures:
> I'm using one particular chapter of our test suite that is likely
> to trigger interesting inference issues.
> Total number of tests: 1493
> Regressions when using new type inference: 159
> Of these 159 regressions the majority (111) can be directly
> ascribed to the following TODOs in the spec, section 18.2.3:
> "To do: define the parameterization of a class C for a type T;
> define the most specific array supertype of a type T."
> Could you give an ETA for a fix to these TODOs? In this case
> an informal sketch would already be quite helpful.
This is a long-standing problem with JLS. See, for example, from JLS 7 15.12.2.7:
"If F has the form G<..., Yk-1, U, Yk+1, ...>, where U is a type expression that involves Tj, then if A has a supertype of the form G<..., Xk-1, V, Xk+1, ...> where V is a type expression, this algorithm is applied recursively to the constraint V = U."
The phrase "has a supertype" doesn't bother to explain how this supertype is derived. Add wildcards to the mix, and it's a bit of a mess.
In practice, this has typically been handled by capturing A and recurring on its supertype, but that can lead to some unsound results, where new capture variables end up as inference variable bounds. The goal here is to come up with a well-defined, sound alternative. But in the meantime, whatever you were doing for Java 7 should continue to work as an approximation, and I wouldn't expect any regressions.
Pretty much the same comment for the array supertype, although in that case I don't think there are any really interesting problems to tackle. The Java 7 spec just doesn't do a very good job with things like intersection types, variables with variable bounds, etc.
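To make the "parameterization of a class C for a type T" question concrete, here is a small sketch (my own example, not from the draft spec): solving the constraint ArrayList<String> <: Collection<alpha> requires finding the supertype of ArrayList<String> that is a parameterization of Collection, namely Collection<String>, so that alpha = String.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class ParameterizationDemo {
    // Inferring T here forces the compiler to compute the
    // parameterization of Collection for the argument's type.
    static <T> T first(Collection<T> c) {
        return c.iterator().next();
    }

    public static void main(String[] args) {
        ArrayList<String> list = new ArrayList<>(List.of("a", "b"));
        // Reduction produces ArrayList<String> <: Collection<alpha);
        // walking the supertype hierarchy yields Collection<String>,
        // so alpha = String and the call is well-typed.
        String s = first(list);
        System.out.println(s);
    }
}
```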
> Additionally, I could use a minor clarification on the following
> items from 18.4:
> * If αi has one or more proper lower bounds, L1, ..., Lk, then Ti = lub(L1, ..., Lk).
> * Otherwise, where αi has proper upper bounds U1, ..., Uk, Ti = glb(U1, ..., Uk).
> These items don't explicitly specify how mixed proper and improper
> bounds should be handled. I assume for this part to apply, *all*
> bounds of the given kind must be proper bounds, right?
> I first interpreted this as a filter on the set of bounds, but
> that seems to yield bogus results.
It should be a filter. The subset of bounds that are proper bounds are used, and the rest are ignored.
Then you test (via incorporation) whether this choice actually satisfies all the bounds, and if it does, great. If not, you proceed to the next step, which performs a capture-like operation to create a type variable representing the solution.
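As a small illustration of the lub case (my own example, not from the spec): when two arguments contribute distinct proper lower bounds on the same variable, resolution takes their least upper bound.

```java
import java.io.Serializable;
import java.util.List;

public class ResolutionDemo {
    static <T> List<T> pair(T a, T b) {
        return List.of(a, b);
    }

    public static void main(String[] args) {
        // alpha gets proper lower bounds Integer and String, so
        // T = lub(Integer, String) -- an intersection that includes
        // Serializable, which is why this assignment type-checks.
        List<? extends Serializable> xs = pair(1, "one");
        System.out.println(xs.size());
    }
}
```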
Keep in mind that the order in which variables are resolved should have the effect of turning dependencies (alpha <: Foo<beta>) into proper bounds (alpha <: Foo<String>) before you get to this point. The only time that doesn't work is when there are circular dependencies (alpha <: Foo<beta>, beta <: Bar<alpha> -- or just alpha <: Foo<alpha>).
This area -- what if there are inference variables in the bounds? -- has been a mess in JLS and javac, with various attempts to patch it. So I wouldn't be surprised if Eclipse's current behavior is pretty ad hoc.