JSR 335 Lambda Specification, 0.6.2

Stephan Herrmann stephan.herrmann at berlin.de
Mon Apr 1 09:15:42 PDT 2013

Hi Dan,

thanks for your answer. Let me translate my silence since then
into a more explicit response.

>>    "To do: define the parameterization of a class C for a type T;
>>     define the most specific array supertype of a type T."
>> Could you give an ETA for a fix to these TODOs? In this case
>> an informal sketch would already be quite helpful.
> This is a long-standing problem with JLS.  See, for example, from JLS 7
> "If F has the form G<..., Yk-1, U, Yk+1, ...>, where U is a type expression that involves Tj, then if A has a supertype of the form G<..., Xk-1, V, Xk+1, ...> where V is a type expression, this algorithm is applied recursively to the constraint V = U."
> The phrase "has a supertype" doesn't bother to explain how this supertype is derived.  Add wildcards to the mix, and it's a bit of a mess.
> In practice, this has typically been handled by capturing A and recurring on its supertype, but that can lead to some unsound results, where new capture variables end up as inference variable bounds.  The goal here is to come up with a well-defined, sound alternative.
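To make the problem concrete for other readers, here is a minimal (hypothetical) example of my own, not taken from the spec: reducing the constraint `StringList <: List<T>` requires deriving StringList's parameterization of List before T can be inferred.

```java
import java.util.ArrayList;
import java.util.List;

public class SupertypeDerivation {
    // A class whose supertype parameterization List<String> must be
    // derived during inference:
    static class StringList extends ArrayList<String> {}

    // F has the form List<T>; reducing StringList <: List<T> requires
    // finding the parameterization of List for StringList, namely
    // List<String>, so that T = String can be inferred.
    static <T> T first(List<T> list) {
        return list.get(0);
    }

    public static void main(String[] args) {
        StringList l = new StringList();
        l.add("hello");
        String s = first(l); // T inferred as String via the derived supertype
        System.out.println(s);
    }
}
```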

I wholeheartedly support your notion of improving the situation.

>  But in the mean time, whatever you were doing for Java 7 should continue to work as an approximation, and I wouldn't expect any regressions.

I should have mentioned that I observed these regressions *after* doing
a best guess as to which part of the old implementation might come
closest to what we need here. Close, but no cigar. I'm not surprised
by the mismatch given that the functional breakdown in the existing
Eclipse compiler implementation is probably quite different from the
code you are looking at.

Looking deeper into the tests where this approximation fails,
I see that most of them involve a raw type in the position of the
subtype (S).
Could you please give an advance statement on how 18.5.5 will be
integrated into 18.2.3 and friends? More specifically, if unchecked
conversion is involved, should this be computed by a separate/nested
invocation of the inference, or should a set of additional bounds
and constraints be added to the current inference?
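A minimal sketch of the kind of case I mean (my own example, not one of the failing tests): a raw List passed where a List<T> is expected, so applicability hinges on unchecked conversion during inference.

```java
import java.util.ArrayList;
import java.util.List;

public class RawTypeInference {
    static <T> List<T> copy(List<T> in) {
        return new ArrayList<>(in);
    }

    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        List raw = new ArrayList(); // raw type in the position of S
        raw.add("element");
        // Applicability requires an unchecked conversion of the raw List
        // to List<T>; the method is applicable, the invocation's type is
        // erased, and the compiler issues an unchecked warning.
        List<String> copied = copy(raw);
        System.out.println(copied.get(0));
    }
}
```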

>> Additionally, I could use a minor clarification on the following
>> items from 18.4.:
>> "
>> * If αi has one or more proper lower bounds, L1, ..., Lk, then Ti = lub(L1, ..., Lk).
>> * Otherwise, where αi has proper upper bounds U1, ..., Uk, Ti = glb(U1, ..., Uk)."
>> These items don't explicitly specify how mixed proper and improper
>> bounds should be handled. I assume for this part to apply, *all*
>> bounds of the given kind must be proper bounds, right?
>> I first interpreted this as a filter on the set of bounds, but
>> that seems to yield bogus results.
> It should be a filter.  The subset of bounds that are proper bounds are used, and the rest are ignored.

Thanks, so my first interpretation was right, and the cause of the
resulting regression must be sought elsewhere.
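For reference, the filter interpretation in a concrete (hypothetical) case of my own: the inference variable for T collects proper lower bounds Integer and Double from the two arguments, and resolution takes their lub.

```java
import java.util.Arrays;
import java.util.List;

public class ResolutionExample {
    // T gets proper lower bounds Integer and Double from the arguments;
    // resolution instantiates T = lub(Integer, Double), an intersection
    // of Number with Comparable<...>.
    static <T> List<T> pair(T a, T b) {
        return Arrays.asList(a, b);
    }

    public static void main(String[] args) {
        // T = lub(Integer, Double) is a subtype of Number, so this assigns:
        List<? extends Number> nums = pair(1, 2.0);
        System.out.println(nums);
    }
}
```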

So, yes, the spec is basically implementable, but, no, it doesn't work.
Or, sorry, it may work for some 70 percent of the interesting programs,
but not close to any of the many nines we are striving for.
As I see little benefit in building the new implementation on fresh
guesswork, I'll basically just wait for the next version of the spec.

I'd appreciate any hint regarding a schedule for further spec updates.


More information about the lambda-spec-experts mailing list