RFR: 8294588: Auto vectorize half precision floating point conversion APIs [v9]
Vladimir Kozlov
kvn at openjdk.org
Thu Dec 8 00:40:57 UTC 2022
On Wed, 7 Dec 2022 23:31:24 GMT, Smita Kamath <svkamath at openjdk.org> wrote:
>> Hi All,
>>
>> I have added changes to auto-vectorize the Float.float16ToFloat and Float.floatToFloat16 APIs.
>> Following are the performance numbers from the JMH microbenchmark Fp16ConversionBenchmark:
>> Before code changes:
>> Benchmark | (size) | Mode | Cnt | Score | Error | Units
>> Fp16ConversionBenchmark.float16ToFloat | 2048 | thrpt | 3 | 1044.653 | ± 0.041 | ops/ms
>> Fp16ConversionBenchmark.float16ToFloatMemory | 2048 | thrpt | 3 | 2341529.9 | ± 11765.453 | ops/ms
>> Fp16ConversionBenchmark.floatToFloat16 | 2048 | thrpt | 3 | 2156.662 | ± 0.653 | ops/ms
>> Fp16ConversionBenchmark.floatToFloat16Memory | 2048 | thrpt | 3 | 2007988.1 | ± 361.696 | ops/ms
>>
>> After:
>> Benchmark | (size) | Mode | Cnt | Score | Error | Units
>> Fp16ConversionBenchmark.float16ToFloat | 2048 | thrpt | 3 | 20460.349 | ± 372.327 | ops/ms
>> Fp16ConversionBenchmark.float16ToFloatMemory | 2048 | thrpt | 3 | 2342125.200 | ± 9250.899 | ops/ms
>> Fp16ConversionBenchmark.floatToFloat16 | 2048 | thrpt | 3 | 22553.977 | ± 483.034 | ops/ms
>> Fp16ConversionBenchmark.floatToFloat16Memory | 2048 | thrpt | 3 | 2007899.797 | ± 150.296 | ops/ms
>>
>> Kindly review and share your feedback.
>>
>> Thanks.
>> Smita
>
> Smita Kamath has updated the pull request incrementally with one additional commit since the last revision:
>
> Updated test case and updated code as per review comment
I started new testing after verifying locally that the test passed with `-XX:UseAVX=1`.
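For context, a minimal sketch of the kind of loop this change targets (the class name `Fp16Sketch` is hypothetical; the `Float.float16ToFloat`/`Float.floatToFloat16` APIs require JDK 20+). Simple counted loops over arrays like these are the shape the superword pass can turn into vector conversions:

```java
// Hypothetical sketch of loops that C2 can auto-vectorize with this change.
// Requires JDK 20+, where Float.float16ToFloat/floatToFloat16 exist.
public class Fp16Sketch {
    // Widen half-precision values (FP16 bit patterns stored in shorts) to floats.
    static void float16ToFloat(short[] src, float[] dst) {
        for (int i = 0; i < src.length; i++) {
            dst[i] = Float.float16ToFloat(src[i]);
        }
    }

    // Narrow floats to half precision (round-to-nearest-even).
    static void floatToFloat16(float[] src, short[] dst) {
        for (int i = 0; i < src.length; i++) {
            dst[i] = Float.floatToFloat16(src[i]);
        }
    }

    public static void main(String[] args) {
        // These values are exactly representable in FP16, so a round trip is lossless.
        float[] f = { 1.0f, -2.0f, 0.5f };
        short[] h = new short[f.length];
        floatToFloat16(f, h);
        float[] back = new float[f.length];
        float16ToFloat(h, back);
        for (int i = 0; i < f.length; i++) {
            if (f[i] != back[i]) throw new AssertionError("round trip failed at " + i);
        }
        System.out.println("ok");
    }
}
```

On x86, with the relevant CPU features present, such loops can compile down to VCVTPH2PS/VCVTPS2PH instead of per-element scalar conversions.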
-------------
PR: https://git.openjdk.org/jdk/pull/11471
More information about the hotspot-compiler-dev mailing list