RFR: 8311815: Incorrect exhaustivity computation [v2]
Vicente Romero
vromero at openjdk.org
Fri Jul 14 18:53:29 UTC 2023
On Fri, 14 Jul 2023 08:32:03 GMT, Jan Lahoda <jlahoda at openjdk.org> wrote:
>> Consider code like:
>>
>> class Test22 {
>>
>> record Pair(I i1, I i2) {}
>>
>> sealed interface I {}
>>
>> record C() implements I {}
>>
>> record D() implements I {}
>>
>> void exhaustinvenessWithInterface(Pair pairI) {
>> switch (pairI) {
>> case Pair(D fst, C snd) -> {}
>> case Pair(C fst, C snd) -> {}
>> case Pair(C fst, I snd) -> {}
>> case Pair(D fst, D snd) -> {}
>> }
>> }
>> }
>>
>>
>> when evaluating exhaustivity, the outcome is wrong: if `Pair(D fst, C snd)` and `Pair(C fst, C snd)` get merged into `Pair(I _, C _)` (and the existing patterns are removed from the pattern set), the remaining patterns are `Pair(I _, C _)`, `Pair(C fst, I snd)` and `Pair(D fst, D snd)`. The existing implementation does not know how to continue from here: it would need to replace `Pair(C fst, I snd)` with `Pair(C fst, D snd)`, which would allow unifying `Pair(C fst, D snd)` and `Pair(D fst, D snd)` into `Pair(I _, D _)` and then continuing. But that is tricky, as such speculation may not lead to a good result, and we would need to implement backtracking.
>>
>> But, if `Pair(D fst, C snd)` and `Pair(D fst, D snd)` were merged into `Pair(D _, I _)`, it could then be merged with `Pair(C fst, I snd)` to produce `Pair(I _, I _)`.
>>
>> The proposal here aims to allow the second path by not removing the existing patterns when merging sealed subtypes. I.e. starting with `{Pair(D _, C _), Pair(C _, C _), Pair(C _, I _), Pair(D _, D _)}`, we would merge to `{Pair(D _, C _), Pair(C _, C _), Pair(C _, I _), Pair(D _, D _), *Pair(I _, C _)*}`, and then to `{Pair(D _, C _), Pair(C _, C _), Pair(C _, I _), Pair(D _, D _), Pair(I _, C _), *Pair(D _, I _)*}`, which would then lead to `Pair(I _, I _)`, and in turn to `Pair`, confirming exhaustivity.
>>
>> This can be achieved easily by not removing bindings in `reduceBindingPatterns`. The problem is that when we stop removing bindings, the algorithm may get into an infinite loop, re-adding the same binding over and over. The solution is to use Sets instead of Lists. To make the algorithm faster and more stable, there are two more additions: a) all record patterns covered by binding patterns are removed from the pattern set; b) `reduceNestedPatterns` attempts to produce results for all mismatching positions without returning (otherwise the algorithm may constantly process only the first mismatching position without progressing).
>
> Jan Lahoda has updated the pull request with a new target base due to a merge or a rebase. The incremental webrev excludes the unrelated changes brought in by the merge/rebase. The pull request contains three additional commits since the last revision:
>
> - Merge branch 'master' into JDK-8311815
> - 8311815: Incorrect exhaustivity computation
> - 8311038: Incorrect exhaustivity computation
src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Flow.java line 3511:
> 3509: public RecordPattern(Type recordType, Type[] fullComponentTypes, PatternDescription[] nested) {
> 3510: this(recordType, hashCode(-1, recordType, nested), fullComponentTypes, nested);
> 3511: if ("test.Test.R(test.Test.C _, test.Test.R(test.Test.C _, test.Test.R(test.Test.C _, test.Test.C _, test.Test.C _), test.Test.I _), test.Test.I _)".equals(toString())) {
Sorry, I missed this earlier. This looks like leftover debugging code that should be removed, right?
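(For readers following the quoted summary: a minimal, hypothetical sketch of the set-based fixed-point idea it describes. This is not javac's actual `PatternDescription` machinery; patterns are modeled as plain strings and the two merge rules stand in for `reduceBindingPatterns`. Derived patterns are added without removing existing ones, and because the working collection is a `Set`, re-deriving a known pattern does not change it, so the reduction terminates at a fixed point instead of looping.)

```java
import java.util.HashSet;
import java.util.Set;

public class ExhaustivenessSketch {
    // One reduction step: derive new patterns from existing ones, never
    // removing any. C and D are the sealed subtypes of I, so
    // Pair(D, x) + Pair(C, x) => Pair(I, x), and symmetrically for the
    // second component.
    static Set<String> mergeStep(Set<String> patterns) {
        Set<String> result = new HashSet<>(patterns);
        for (String p : patterns) {
            // first component: D + C => I
            if (p.startsWith("Pair(D,")
                    && patterns.contains(p.replace("Pair(D,", "Pair(C,"))) {
                result.add(p.replace("Pair(D,", "Pair(I,"));
            }
            // second component: C + D => I
            if (p.endsWith(",C)")
                    && patterns.contains(p.replace(",C)", ",D)"))) {
                result.add(p.substring(0, p.length() - 2) + "I)");
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // The pattern set from the example in the thread.
        Set<String> patterns = new HashSet<>(Set.of(
                "Pair(D,C)", "Pair(C,C)", "Pair(C,I)", "Pair(D,D)"));
        Set<String> next;
        // Iterate to a fixed point; a List-based worklist could re-add
        // the same pattern forever, a Set cannot grow past saturation.
        while (!(next = mergeStep(patterns)).equals(patterns)) {
            patterns = next;
        }
        System.out.println(patterns.contains("Pair(I,I)")); // prints "true"
    }
}
```

Running this derives `Pair(I,C)` and `Pair(D,I)` in the first step and `Pair(I,I)` in the second, mirroring the "second path" described above.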
-------------
PR Review Comment: https://git.openjdk.org/jdk/pull/14872#discussion_r1264044634
More information about the compiler-dev mailing list