RFR: 8311815: Incorrect exhaustivity computation [v2]
Jan Lahoda
jlahoda at openjdk.org
Fri Jul 14 08:32:03 UTC 2023
> Consider code like:
>
> class Test22 {
>
>     record Pair(I i1, I i2) {}
>
>     sealed interface I {}
>
>     record C() implements I {}
>
>     record D() implements I {}
>
>     void exhaustinvenessWithInterface(Pair pairI) {
>         switch (pairI) {
>             case Pair(D fst, C snd) -> {}
>             case Pair(C fst, C snd) -> {}
>             case Pair(C fst, I snd) -> {}
>             case Pair(D fst, D snd) -> {}
>         }
>     }
> }
>
>
> when evaluating exhaustivity, the outcome is wrong: the switch is exhaustive, but the computation fails to confirm it. If `Pair(D fst, C snd)` and `Pair(C fst, C snd)` get merged into `Pair(I _, C _)` (and the existing patterns are removed from the pattern set), the remaining patterns are `Pair(I _, C _)`, `Pair(C fst, I snd)` and `Pair(D fst, D snd)`. The existing implementation does not know how to continue from here: it would need to replace `Pair(C fst, I snd)` with `Pair(C fst, D snd)`, which would allow unifying `Pair(C fst, D snd)` and `Pair(D fst, D snd)` into `Pair(I _, D _)` and then continue. But that is tricky, as such speculation may not lead to a good result, and we would need to implement backtracking.
>
> But if `Pair(D fst, C snd)` and `Pair(D fst, D snd)` were merged into `Pair(D _, I _)` instead, the result could then be merged with `Pair(C fst, I snd)` to produce `Pair(I _, I _)`.
>
> The proposal here aims to enable the second path by not removing the existing patterns when merging sealed subtypes. That is, starting with `{Pair(D _, C _), Pair(C _, C _), Pair(C _, I _), Pair(D _, D _)}`, we would first merge to `{Pair(D _, C _), Pair(C _, C _), Pair(C _, I _), Pair(D _, D _), *Pair(I _, C _)*}`, and then to `{Pair(D _, C _), Pair(C _, C _), Pair(C _, I _), Pair(D _, D _), Pair(I _, C _), *Pair(D _, I _)*}`, which would then lead to `Pair(I _, I _)` and finally to `Pair`, confirming exhaustivity.
>
> This can be achieved easily by not removing bindings in `reduceBindingPatterns`. The problem is that once we stop removing bindings, the algorithm may get into an infinite loop, re-adding the same binding over and over. The solution is to use Sets instead of Lists. To make the algorithm faster and more stable, there are two more additions: a) all record patterns covered by binding patterns are removed from the pattern set; b) `reduceNestedPatterns` attempts to produce results for all mismatching positions before returning (otherwise the algorithm may keep processing only the first mismatching position without making progress).
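To make the set-based reduction described above concrete, here is a minimal, self-contained sketch. It is not the javac implementation; the names `Ty`, `P`, `reduce` and `mergesToI` are invented for illustration, and the model is deliberately tiny: one sealed interface I with exactly two permitted subtypes C and D, and a two-component Pair pattern. What it demonstrates is only the core idea: merged patterns are added without removing the originals, and a Set plus a fixed-point loop keeps the re-adding from looping forever.

import java.util.HashSet;
import java.util.Set;

public class ExhaustivenessSketch {

    // Component types: the permitted subtypes C and D, and the sealed interface I itself.
    enum Ty { C, D, I }

    // P(D, C) stands for the record pattern Pair(D fst, C snd).
    record P(Ty fst, Ty snd) { }

    // Do the two component types together cover the whole sealed interface I?
    static boolean mergesToI(Ty a, Ty b) {
        return (a == Ty.C && b == Ty.D) || (a == Ty.D && b == Ty.C);
    }

    // Repeatedly add merged patterns WITHOUT removing the originals, until no
    // new pattern appears. Re-adding an already known pattern is a no-op on the
    // Set, so the loop reaches a fixed point instead of spinning forever.
    static Set<P> reduce(Set<P> patterns) {
        Set<P> work = new HashSet<>(patterns);
        boolean changed = true;
        while (changed) {
            Set<P> toAdd = new HashSet<>();
            for (P p : work) {
                for (P q : work) {
                    // same second component, first components cover I: merge on the first position
                    if (p.snd() == q.snd() && mergesToI(p.fst(), q.fst())) {
                        toAdd.add(new P(Ty.I, p.snd()));
                    }
                    // same first component, second components cover I: merge on the second position
                    if (p.fst() == q.fst() && mergesToI(p.snd(), q.snd())) {
                        toAdd.add(new P(p.fst(), Ty.I));
                    }
                }
            }
            changed = work.addAll(toAdd); // false once nothing new was derived
        }
        return work;
    }

    public static void main(String[] args) {
        // the four case patterns from the example above
        Set<P> cases = Set.of(
                new P(Ty.D, Ty.C),
                new P(Ty.C, Ty.C),
                new P(Ty.C, Ty.I),
                new P(Ty.D, Ty.D));
        Set<P> reduced = reduce(cases);
        // the switch is exhaustive iff the fully generalized Pair(I _, I _) was derived
        System.out.println("exhaustive: " + reduced.contains(new P(Ty.I, Ty.I)));
    }
}

In the first round this derives both `Pair(I _, C _)` and `Pair(D _, I _)` (the originals stay in the set), in the second round `Pair(I _, I _)`, and it prints `exhaustive: true` regardless of which merge is considered first — which is what keeping the originals buys: no backtracking is needed. The actual patch additionally drops record patterns covered by binding patterns and handles arbitrary nesting, which this sketch does not attempt.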
Jan Lahoda has updated the pull request with a new target base due to a merge or a rebase. The incremental webrev excludes the unrelated changes brought in by the merge/rebase. The pull request contains three additional commits since the last revision:
- Merge branch 'master' into JDK-8311815
- 8311815: Incorrect exhaustivity computation
- 8311038: Incorrect exhaustivity computation
-------------
Changes:
- all: https://git.openjdk.org/jdk/pull/14872/files
- new: https://git.openjdk.org/jdk/pull/14872/files/b5b1515b..1f5c1735
Webrevs:
- full: https://webrevs.openjdk.org/?repo=jdk&pr=14872&range=01
- incr: https://webrevs.openjdk.org/?repo=jdk&pr=14872&range=00-01
Stats: 75624 lines in 1342 files changed: 18076 ins; 51990 del; 5558 mod
Patch: https://git.openjdk.org/jdk/pull/14872.diff
Fetch: git fetch https://git.openjdk.org/jdk.git pull/14872/head:pull/14872
PR: https://git.openjdk.org/jdk/pull/14872