RFR: 8315024: Vector API FP reduction tests should not test for exact equality [v2]

Gergö Barany gbarany at openjdk.org
Fri Oct 13 13:05:39 UTC 2023


On Thu, 12 Oct 2023 09:28:55 GMT, Gergö Barany <gbarany at openjdk.org> wrote:

>> Generally looks good, thanks for looking into this.
>> I left a few comments below.
>> 
>> Another concern I have, which I ran into by writing tests for the auto-vectorizer:
>> Are we making sure the float/double reductions do not degenerate to either zero or infinity? Because if they do degenerate, then we have only a very weak test.
>> I'm especially worried about all the values that depend on `i`, and then get multiplied. Don't the multiplications hit the maximal float value very quickly?
>
> @eme64 would you have time to take another look at the changes I have made to this PR?

> @gergo- I just looked at it again. It looks better.

Thanks.

> Still, I have a concern about `cornerCaseValues`: Does the result not always degenerate to zero / NaN now? Every 17th value is a corner-case. And there, we mix in all of the special cases deterministically (zero, infty, min/max, NaN), depending on INT/FP type. But it seems that way we always have a NaN in the FP array - and therefore the reduction would always be NaN. Similar things happen with int-multiplication and zero.

Currently there are reduction tests that reduce not across the whole input array but over individual vector-sized blocks, e.g.:

        for (int ic = 0; ic < INVOC_COUNT; ic++) {
            for (int i = 0; i < a.length; i += SPECIES.length()) {
                DoubleVector av = DoubleVector.fromArray(SPECIES, a, i);
                r[i] = av.reduceLanes(VectorOperators.MUL);
            }
        }


The largest vector length is 16 elements (16 × 32-bit floats = 512 bits, the maximum vector size). Since corner-case values occur only every 17th element, each vector-sized block contains at most one corner-case value, with the rest of the block holding ordinary values. No block mixes different corner-case values.
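To illustrate the point, here is a plain-Java sketch (deliberately not using the Vector API, so it runs without the incubator module; the block size of 16 and the every-17th-slot corner-case pattern are assumptions matching the discussion). Only the block that actually contains the NaN degenerates to NaN; all other blocks reduce to finite, nonzero products.

```java
public class BlockReduceSketch {
    // Multiply-reduce one block of the array, mimicking what
    // av.reduceLanes(VectorOperators.MUL) does for a vector-sized slice.
    static double reduceBlockMul(double[] a, int start, int len) {
        double r = 1.0;
        for (int i = start; i < start + len; i++) {
            r *= a[i];
        }
        return r;
    }

    public static void main(String[] args) {
        final int BLOCK = 16;            // max vector length from the discussion
        double[] a = new double[64];
        for (int i = 0; i < a.length; i++) {
            a[i] = 1.0 + (i % 3) * 0.5;  // ordinary, well-behaved values
        }
        a[17] = Double.NaN;              // a single corner case in block 1

        for (int i = 0; i < a.length; i += BLOCK) {
            double r = reduceBlockMul(a, i, BLOCK);
            System.out.println("block " + (i / BLOCK) + " -> " + r);
        }
        // Only block 1 (indices 16..31) prints NaN; the other blocks
        // reduce to finite products, so the test still exercises
        // meaningful values.
    }
}
```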

> I wonder if it would not be better to generate some data randomly, and throw in the special-cases with a very low probability. Maybe 50% that any show up, and then randomly pick one or more special case values. That way you can test the different special-cases separately. And their position could also be random.

I would say that this could be tackled more naturally as part of https://bugs.openjdk.org/browse/JDK-8309647 which concerns moving reductions out of loops and would require revisiting these tests anyway.
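For reference, the randomized scheme suggested above could be sketched roughly as follows (this is a hypothetical generator, not code from the PR; the method name, corner-case set, and probability parameter are all assumptions). Each slot has a small, independent chance of being replaced by a randomly chosen corner case, so different special values and positions get exercised across runs while the bulk of the data stays well-behaved.

```java
import java.util.Random;

public class RandomFillSketch {
    // Hypothetical generator along the lines of the suggestion above:
    // mostly "normal" values, each slot independently replaced by a
    // randomly picked corner case with probability cornerProbability.
    static double[] fill(int n, long seed, double cornerProbability) {
        double[] corners = {
            0.0, -0.0, Double.MIN_VALUE, Double.MAX_VALUE,
            Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY, Double.NaN
        };
        Random rnd = new Random(seed);
        double[] a = new double[n];
        for (int i = 0; i < n; i++) {
            if (rnd.nextDouble() < cornerProbability) {
                a[i] = corners[rnd.nextInt(corners.length)];
            } else {
                // Keep magnitudes near 1 so MUL reductions neither
                // overflow to infinity nor underflow to zero.
                a[i] = 0.5 + rnd.nextDouble();
            }
        }
        return a;
    }

    public static void main(String[] args) {
        double[] a = fill(1024, 42L, 0.01);
        System.out.println("first value = " + a[0]);
    }
}
```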

> BTW: is there any wiki about the template file format, and how to "compile" it to java? I might want to use it in the future myself :)

I'm not aware of any docs; I learned by doing. I just run `bash gen-tests.sh` to generate the Java code.

-------------

PR Comment: https://git.openjdk.org/jdk/pull/16024#issuecomment-1761486838


More information about the hotspot-compiler-dev mailing list