Enhancing expressions with mutability?

B. Blaser bsrbnd at gmail.com
Sun Apr 5 18:25:25 UTC 2020


On Wed, 1 Apr 2020 at 22:59, Maurizio Cimadamore
<maurizio.cimadamore at oracle.com> wrote:
>
> On 01/04/2020 19:32, B. Blaser wrote:
> > So, here is a concrete experiment:
> >
> > https://bugs.openjdk.java.net/secure/attachment/87565/learning.patch
> >
> > Quoting from my comment on JBS, it uses a parallel genetic algorithm
> > to solve all contextual inference variables at once requiring only a
> > small subset of propagation steps which reduces JDK-8152289 example's
> > compilation time to a couple of seconds! Note that this heuristic is
> > dedicated to similar examples and the usual inference algorithm is
> > still used if the former doesn't converge rapidly enough.
> >
> > The current design is absolutely not affected as this implementation
> > additionally uses very expressive mutable expressions like "Term gene
> > = (`able).link(`adn, `undetvars.get(i), `type());" employing the
> > quotation operator `. It simply requires a boot JDK (14) including the
> > patch minus inference changes in Attr, Infer & InferenceContext.
> >
> > Still not convinced?
>
> Sadly yes :-)

Paradoxically, I agree with you; a few questions still remain...

> I mentioned a number of issues, performances aside, in my previous
> email; error recovery seems particularly nasty with an approach such as
> this,

Yes, this is the most problematic issue, but I'm rather confident that
if a solution exists the genetic algorithm will converge rapidly
enough (in fewer than 10 generations); otherwise we can consider that
an error occurred. Of course, this would have to be confirmed with
many measurements, but as far as I know,
* either the context has few inference variables (1 to 4) and a clever
choice of the initial population might simply perform an exhaustive
combinatorial search using multiple threads,
* or the context has potentially many variables which are most of the
time exact copies of each other, resulting in very few degrees of
freedom and thus a very fast convergence too (see the rough sketch
below).
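To illustrate the cutoff and the fallback to the usual algorithm, here
is a minimal sketch; it is not the actual patch, and Candidate,
InferenceProblem and solveClassically() are hypothetical placeholders:

  import java.util.List;

  class GeneticSolverSketch {
      // Give up on the heuristic if no solution is found after 10 generations.
      static final int MAX_GENERATIONS = 10;

      interface Candidate { }

      interface InferenceProblem {
          List<Candidate> initialPopulation();          // seeded from the variables' bounds
          boolean isSolution(Candidate c);              // are all bounds satisfied?
          List<Candidate> evolve(List<Candidate> pop);  // selection, crossover, mutation
          Candidate solveClassically();                 // the usual inference algorithm
      }

      static Candidate solve(InferenceProblem p) {
          List<Candidate> population = p.initialPopulation();
          for (int gen = 0; gen < MAX_GENERATIONS; gen++) {
              for (Candidate c : population)
                  if (p.isSolution(c))
                      return c;                     // converged: keep the heuristic result
              population = p.evolve(population);    // otherwise breed the next generation
          }
          return p.solveClassically();              // didn't converge: fall back
      }
  }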

Of course, you might argue that this genetic engine might fail to
infer valid expressions, but I believe this is inherent to inference
algorithms in general. Recall JDK-8219318: you said yourself that the
current engine might be "out of gas" in some situations, and if I
remember correctly, this unresolved issue comes from a weakness in the
propagation rules which seems not so trivial to solve... and although
the current genetic prototype isn't evolved enough to resolve it yet,
I'm rather confident that it might do so intrinsically with some more
maturation!

> but more generally, having two inference engines opens up a whole
> new can of worms, where now you have two implementations to maintain and
> keep in sync, and there are many more things that can go wrong - and
> testing becomes similarly much harder. What if the two algorithms don't
> agree? Might we end up in situations where, depending on how much time
> one engine takes to converge, we get an error or a failure because a
> different solver is used? This is the kind of problems that worry me.

That's right, we cannot live with two different engines. The current
prototype is only a first partial attempt; the goal of this experiment
would be to end up with a complete genetic inference engine that could
be compared with the existing, more formal one.
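During such a comparison, I'd imagine running both engines side by
side and reporting any disagreement, with the formal engine remaining
authoritative. A purely illustrative sketch (crossCheck and both
Supplier arguments are hypothetical, not javac code):

  import java.util.Objects;
  import java.util.function.Supplier;

  class EngineComparison {
      // Run both engines on the same inference context and log any
      // disagreement; the formal result stays authoritative for now.
      static <T> T crossCheck(Supplier<T> geneticEngine, Supplier<T> formalEngine) {
          T genetic = geneticEngine.get();
          T formal = formalEngine.get();
          if (!Objects.equals(genetic, formal))
              System.err.println("engines disagree: " + genetic + " vs " + formal);
          return formal;
      }
  }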

> In my eyes, 8152289 is "just" a performance bug, and one that we know
> how to solve (*), but we did not get enough time to work on it. Also, as
> far as I know, the pathological cases described in that bug are the only
> thing left for us to solve performance-wise (e.g. when speaking about
> programs which keep running through inference for minutes). Coming up
> with a new inference engine to solve these classes of problem is, in my
> opinion, the wrong way to go about things. It might have been a nice
> validation for your mutable expression experiment,

Sure, it is ;-)

> but we can't consider
> that as a candidate fix for JDK-8152289, sorry.

Not as is, of course, as discussed above.

Bernard

> Maurizio

