RFR: 8294588: Auto vectorize half precision floating point conversion APIs
Jatin Bhateja
jbhateja at openjdk.org
Fri Dec 2 18:57:16 UTC 2022
On Fri, 2 Dec 2022 04:22:39 GMT, Smita Kamath <svkamath at openjdk.org> wrote:
> Hi All,
>
> I have added changes for auto-vectorizing the Float.float16ToFloat and Float.floatToFloat16 APIs.
> Following are the performance numbers from the JMH microbenchmark Fp16ConversionBenchmark:
> Before code changes:
> Benchmark | (size) | Mode | Cnt | Score | Error | Units
> Fp16ConversionBenchmark.float16ToFloat | 2048 | thrpt | 3 | 1044.653 | ± 0.041 | ops/ms
> Fp16ConversionBenchmark.float16ToFloatMemory | 2048 | thrpt | 3 | 2341529.9 | ± 11765.453 | ops/ms
> Fp16ConversionBenchmark.floatToFloat16 | 2048 | thrpt | 3 | 2156.662 | ± 0.653 | ops/ms
> Fp16ConversionBenchmark.floatToFloat16Memory | 2048 | thrpt | 3 | 2007988.1 | ± 361.696 | ops/ms
>
> After:
> Benchmark | (size) | Mode | Cnt | Score | Error | Units
> Fp16ConversionBenchmark.float16ToFloat | 2048 | thrpt | 3 | 20460.349 | ± 372.327 | ops/ms
> Fp16ConversionBenchmark.float16ToFloatMemory | 2048 | thrpt | 3 | 2342125.200 | ± 9250.899 | ops/ms
> Fp16ConversionBenchmark.floatToFloat16 | 2048 | thrpt | 3 | 22553.977 | ± 483.034 | ops/ms
> Fp16ConversionBenchmark.floatToFloat16Memory | 2048 | thrpt | 3 | 2007899.797 | ± 150.296 | ops/ms
>
> Kindly review and share your feedback.
>
> Thanks.
> Smita
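
For context, here is a minimal sketch (not code from the PR; the class and method names are illustrative) of the scalar conversion loops that this kind of change lets C2 auto-vectorize, using the public Float.float16ToFloat and Float.floatToFloat16 APIs named in the message:

```java
public class Fp16Demo {
    // Widen an array of IEEE 754 binary16 values (stored as shorts)
    // to binary32. A simple counted loop like this is the shape the
    // auto-vectorizer can turn into vector conversion instructions.
    static void f16ToF32(short[] src, float[] dst) {
        for (int i = 0; i < src.length; i++) {
            dst[i] = Float.float16ToFloat(src[i]);
        }
    }

    // Narrow binary32 values to binary16, with rounding handled by
    // Float.floatToFloat16 (round-to-nearest-even).
    static void f32ToF16(float[] src, short[] dst) {
        for (int i = 0; i < src.length; i++) {
            dst[i] = Float.floatToFloat16(src[i]);
        }
    }

    public static void main(String[] args) {
        // These values are exactly representable in binary16,
        // so the round trip reproduces them bit-for-bit.
        float[] f = {1.0f, -2.5f, 0.5f};
        short[] h = new short[f.length];
        f32ToF16(f, h);
        float[] back = new float[f.length];
        f16ToF32(h, back);
        for (float v : back) {
            System.out.println(v);
        }
    }
}
```

Note that both APIs require JDK 20 or later; the Fp16ConversionBenchmark JMH benchmark above times loops of essentially this form.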
src/hotspot/share/opto/vectornode.hpp line 1630:
> 1628: };
> 1629:
> 1630: class HF2FVNode : public VectorNode {
You may use the same naming convention as the other vector cast IR nodes:
VectorCastH2F and F2H
-------------
PR: https://git.openjdk.org/jdk/pull/11471
More information about the hotspot-compiler-dev mailing list