RFR: 8311815: Incorrect exhaustivity computation
Jan Lahoda
jlahoda at openjdk.org
Thu Jul 13 15:30:42 UTC 2023
Consider code like:
class Test22 {
    record Pair(I i1, I i2) {}
    sealed interface I {}
    record C() implements I {}
    record D() implements I {}
    void exhaustinvenessWithInterface(Pair pairI) {
        switch (pairI) {
            case Pair(D fst, C snd) -> {}
            case Pair(C fst, C snd) -> {}
            case Pair(C fst, I snd) -> {}
            case Pair(D fst, D snd) -> {}
        }
    }
}
When evaluating exhaustivity of this switch, the outcome is wrong: the four cases cover all combinations of `C` and `D`, yet exhaustivity is not confirmed. If `Pair(D fst, C snd)` and `Pair(C fst, C snd)` get merged into `Pair(I _, C _)` (and the existing patterns are removed from the pattern set), the remaining patterns are `Pair(I _, C _)`, `Pair(C fst, I snd)` and `Pair(D fst, D snd)`. The existing implementation does not know how to continue from here (it would need to replace `Pair(C fst, I snd)` with `Pair(C fst, D snd)`, which would allow unifying `Pair(C fst, D snd)` and `Pair(D fst, D snd)` into `Pair(I _, D _)` and then continuing, but that is tricky: such speculation may not lead to a good result, and we would need to implement backtracking).
But if `Pair(D fst, C snd)` and `Pair(D fst, D snd)` were merged into `Pair(D _, I _)` instead, the result could then be merged with `Pair(C fst, I snd)` to produce `Pair(I _, I _)`.
The proposal here aims to enable the second path by not removing the existing patterns when merging sealed subtypes. I.e., starting with `{Pair(D _, C _), Pair(C _, C _), Pair(C _, I _), Pair(D _, D _)}`, we would merge to `{Pair(D _, C _), Pair(C _, C _), Pair(C _, I _), Pair(D _, D _), *Pair(I _, C _)*}`, and then to `{Pair(D _, C _), Pair(C _, C _), Pair(C _, I _), Pair(D _, D _), Pair(I _, C _), *Pair(D _, I _)*}`, which would then lead to `Pair(I _, I _)`, then to `Pair`, and confirmed exhaustivity.
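To make this second path concrete, below is a minimal, self-contained sketch of the merge-without-removal idea. It is not the javac implementation; the class `ExhaustivitySketch`, the record `P` and the helper `coversI` are made up for illustration, and each `Pair` component is modelled simply as the string "C", "D" or "I":

import java.util.*;

public class ExhaustivitySketch {
    // One pattern over Pair; each component is "C", "D", or the sealed supertype "I".
    record P(String fst, String snd) {
        @Override public String toString() { return "Pair(" + fst + " _, " + snd + " _)"; }
    }

    // C and D are the only permitted subtypes of I, so together they cover I.
    static boolean coversI(String x, String y) {
        return (x.equals("C") && y.equals("D")) || (x.equals("D") && y.equals("C"));
    }

    public static void main(String[] args) {
        Set<P> patterns = new LinkedHashSet<>(List.of(
                new P("D", "C"), new P("C", "C"), new P("C", "I"), new P("D", "D")));
        boolean progress = true;
        while (progress) {
            Set<P> derived = new LinkedHashSet<>();
            for (P a : patterns) {
                for (P b : patterns) {
                    // same second component, first components cover I => merge in the first position
                    if (a.snd().equals(b.snd()) && coversI(a.fst(), b.fst()))
                        derived.add(new P("I", a.snd()));
                    // same first component, second components cover I => merge in the second position
                    if (a.fst().equals(b.fst()) && coversI(a.snd(), b.snd()))
                        derived.add(new P(a.fst(), "I"));
                }
            }
            // Originals are kept; Set semantics make re-derived patterns a no-op,
            // so the loop terminates at a fixpoint.
            progress = patterns.addAll(derived);
        }
        // Pair(I _, I _) is reached => the switch is exhaustive.
        System.out.println(patterns.contains(new P("I", "I")));
    }
}

Running this prints `true`: `Pair(I _, I _)` is derived while the original four patterns stay in the set, regardless of the order in which merges happen to be tried.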
In javac, this can be achieved easily by not removing bindings in `reduceBindingPatterns`. The problem is that once we stop removing bindings, the algorithm may get into an infinite loop, re-adding the same binding over and over. The solution is to use Sets instead of Lists. To make the algorithm faster and more stable, there are two more additions: a) all record patterns covered by binding patterns are removed from the pattern set; b) `reduceNestedPatterns` attempts to produce results for all mismatching positions before returning (otherwise the algorithm may repeatedly process only the first mismatching position without making progress).
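For addition a), a simplified illustration of the intent (the `Pattern` model and the `removeCoveredRecordPatterns` helper below are hypothetical and do not mirror javac's internal types): once a plain binding pattern for a type is in the set, record patterns deconstructing that same type add nothing and can be dropped:

import java.util.*;

public class CoverageSketch {
    sealed interface Pattern permits Binding, Rec {}
    record Binding(String type) implements Pattern {}                   // e.g. a total binding for Pair
    record Rec(String type, List<Pattern> nested) implements Pattern {} // e.g. Pair(I _, C _)

    // Drop every record pattern whose type already has a plain binding pattern in the set.
    static Set<Pattern> removeCoveredRecordPatterns(Set<Pattern> patterns) {
        Set<String> boundTypes = new HashSet<>();
        for (Pattern p : patterns)
            if (p instanceof Binding b)
                boundTypes.add(b.type());
        Set<Pattern> result = new LinkedHashSet<>();
        for (Pattern p : patterns)
            if (!(p instanceof Rec r && boundTypes.contains(r.type())))
                result.add(p);
        return result;
    }

    public static void main(String[] args) {
        Set<Pattern> ps = new LinkedHashSet<>(List.of(
                new Rec("Pair", List.of(new Binding("I"), new Binding("I"))),
                new Binding("Pair")));
        // Only the Pair binding remains; the record pattern is covered by it.
        System.out.println(removeCoveredRecordPatterns(ps));
    }
}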
-------------
Depends on: https://git.openjdk.org/jdk/pull/14711
Commit messages:
- 8311815: Incorrect exhaustivity computation
Changes: https://git.openjdk.org/jdk/pull/14872/files
Webrev: https://webrevs.openjdk.org/?repo=jdk&pr=14872&range=00
Issue: https://bugs.openjdk.org/browse/JDK-8311815
Stats: 105 lines in 2 files changed: 58 ins; 24 del; 23 mod
Patch: https://git.openjdk.org/jdk/pull/14872.diff
Fetch: git fetch https://git.openjdk.org/jdk.git pull/14872/head:pull/14872
PR: https://git.openjdk.org/jdk/pull/14872