RFR: 8346236: Auto vectorization support for various Float16 operations [v6]
Emanuel Peter
epeter at openjdk.org
Tue Mar 25 08:01:21 UTC 2025
On Sat, 22 Mar 2025 17:55:27 GMT, Jatin Bhateja <jbhateja at openjdk.org> wrote:
>> This is a follow-up PR for https://github.com/openjdk/jdk/pull/22754
>>
>> The patch adds support for vectorizing various Float16 scalar operations (add/subtract/divide/multiply/sqrt/fma).
>>
>> Summary of changes included with the patch:
>> 1. Creation of new C2 vector IR nodes.
>> 2. Auto-vectorization support.
>> 3. x86 backend implementation.
>> 4. New IR verification test for each newly supported vector operation.
>>
>> Following are the performance numbers of Float16OperationsBenchmark
>>
>> System: Intel(R) Xeon(R) processor, code-named Granite Rapids
>> Frequency fixed at 2.5 GHz
>>
>>
>> Baseline
>> Benchmark                                                     (vectorDim)   Mode  Cnt      Score  Error   Units
>> Float16OperationsBenchmark.absBenchmark                              1024  thrpt    2   4191.787         ops/ms
>> Float16OperationsBenchmark.addBenchmark                              1024  thrpt    2   1211.978         ops/ms
>> Float16OperationsBenchmark.cosineSimilarityDequantizedFP16           1024  thrpt    2    493.026         ops/ms
>> Float16OperationsBenchmark.cosineSimilarityDoubleRoundingFP16        1024  thrpt    2    612.430         ops/ms
>> Float16OperationsBenchmark.cosineSimilaritySingleRoundingFP16        1024  thrpt    2    616.012         ops/ms
>> Float16OperationsBenchmark.divBenchmark                              1024  thrpt    2    604.882         ops/ms
>> Float16OperationsBenchmark.dotProductFP16                            1024  thrpt    2    410.798         ops/ms
>> Float16OperationsBenchmark.euclideanDistanceDequantizedFP16          1024  thrpt    2    602.863         ops/ms
>> Float16OperationsBenchmark.euclideanDistanceFP16                     1024  thrpt    2    640.348         ops/ms
>> Float16OperationsBenchmark.fmaBenchmark                              1024  thrpt    2    809.175         ops/ms
>> Float16OperationsBenchmark.getExponentBenchmark                      1024  thrpt    2   2682.764         ops/ms
>> Float16OperationsBenchmark.isFiniteBenchmark                         1024  thrpt    2   3373.901         ops/ms
>> Float16OperationsBenchmark.isFiniteCMovBenchmark                     1024  thrpt    2   1881.652         ops/ms
>> Float16OperationsBenchmark.isFiniteStoreBenchmark                    1024  thrpt    2   2273.745         ops/ms
>> Float16OperationsBenchmark.isInfiniteBenchmark                       1024  thrpt    2   2147.913         ops/ms
>> Float16OperationsBenchmark.isInfiniteCMovBen...
>
> Jatin Bhateja has updated the pull request incrementally with one additional commit since the last revision:
>
> Removing Generator dependency on incubation module
I looked at the changes in `Generators.java`, thanks for adding some code there 😊
Some comments on it:
- You should add some Float16 tests to `test/hotspot/jtreg/testlibrary_tests/generators/tests/TestGenerators.java`.
- I am missing the "mixed distribution" function `float16s()`. As a reference, take `public Generator<Double> doubles()`. The idea is that we have a set of distributions, and we pick a random distribution every time in the tests.
- I'm also missing an "any bits" version, where you would take a random short value and reinterpret it as `Float16`. This ensures that we get all possible encodings, including the multiple NaN encodings (a rough sketch of both ideas follows after this list).
- All of this is probably enough code to make a separate PR.
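To make the request concrete, here is a minimal, self-contained sketch of the two missing pieces. It is illustrative only: the tiny `Gen` interface and the `Random` plumbing stand in for the test library's actual `Generator` infrastructure, and the names `anyBitsFloat16s`/`float16s` are my assumptions, not the final API. Like the PR's generators, it works on raw `short` bit patterns, so it has no dependency on the incubating `Float16` class:

```java
import java.util.Random;

// Illustrative sketch only; Gen stands in for the test library's Generator<T>.
public class Float16GeneratorSketch {
    interface Gen<T> { T next(); }

    private static final Random RND = new Random();

    // "Any bits": a uniformly random short, reinterpreted by callers as
    // Float16 bits, so every encoding (including all NaN payloads) can occur.
    static Gen<Short> anyBitsFloat16s() {
        return () -> (short) RND.nextInt();
    }

    // Uniform over the raw-bits range [lo, hi], in the spirit of the PR's
    // uniformFloat16s.
    static Gen<Short> uniformFloat16s(short lo, short hi) {
        return () -> (short) (lo + RND.nextInt(hi - lo + 1));
    }

    // Mixed-distribution picker in the spirit of doubles(): choose one of
    // the available distributions at random each time a generator is built.
    static Gen<Short> float16s(short lo, short hi) {
        return switch (RND.nextInt(2)) {
            case 0  -> uniformFloat16s(lo, hi);
            default -> anyBitsFloat16s();
        };
    }
}
```

A real `float16s()` would of course also include the special-values and mixed distributions this PR already provides in the set it picks from.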
test/hotspot/jtreg/compiler/vectorization/TestFloat16VectorOperations.java line 74:
> 72: short min_value = float16ToRawShortBits(Float16.MIN_VALUE);
> 73: short max_value = float16ToRawShortBits(Float16.MAX_VALUE);
> 74: Generator<Short> gen = G.mixedWithSpecialFloat16s(G.uniformFloat16s(min_value, max_value), 10, 2);
Here you would simply use the `float16s()` random distribution picker: sometimes you would get uniform, sometimes special, sometimes mixed, sometimes any-bits, etc.
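For illustration, with a picker like the one sketched above (signature hypothetical), line 74 would collapse to:

```java
// Hypothetical: let the library pick the distribution (uniform, special,
// mixed, any-bits, ...) instead of hard-coding one particular combination.
Generator<Short> gen = G.float16s(min_value, max_value);
```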
-------------
PR Review: https://git.openjdk.org/jdk/pull/22755#pullrequestreview-2712740608
PR Review Comment: https://git.openjdk.org/jdk/pull/22755#discussion_r2011516136