RFR: 8294588: Auto vectorize half precision floating point conversion APIs
Jatin Bhateja
jbhateja at openjdk.org
Fri Dec 2 19:24:08 UTC 2022
On Fri, 2 Dec 2022 04:22:39 GMT, Smita Kamath <svkamath at openjdk.org> wrote:
> Hi All,
>
> I have added changes for auto-vectorizing the Float.float16ToFloat and Float.floatToFloat16 APIs.
> Following are the performance numbers of JMH micro Fp16ConversionBenchmark:
> Before code changes:
> Benchmark | (size) | Mode | Cnt | Score | Error | Units
> Fp16ConversionBenchmark.float16ToFloat | 2048 | thrpt | 3 | 1044.653 | ± 0.041 | ops/ms
> Fp16ConversionBenchmark.float16ToFloatMemory | 2048 | thrpt | 3 | 2341529.9 | ± 11765.453 | ops/ms
> Fp16ConversionBenchmark.floatToFloat16 | 2048 | thrpt | 3 | 2156.662 | ± 0.653 | ops/ms
> Fp16ConversionBenchmark.floatToFloat16Memory | 2048 | thrpt | 3 | 2007988.1 | ± 361.696 | ops/ms
>
> After:
> Benchmark | (size) | Mode | Cnt | Score | Error | Units
> Fp16ConversionBenchmark.float16ToFloat | 2048 | thrpt | 3 | 20460.349 | ± 372.327 | ops/ms
> Fp16ConversionBenchmark.float16ToFloatMemory | 2048 | thrpt | 3 | 2342125.200 | ± 9250.899 | ops/ms
> Fp16ConversionBenchmark.floatToFloat16 | 2048 | thrpt | 3 | 22553.977 | ± 483.034 | ops/ms
> Fp16ConversionBenchmark.floatToFloat16Memory | 2048 | thrpt | 3 | 2007899.797 | ± 150.296 | ops/ms
>
> Kindly review and share your feedback.
>
> Thanks.
> Smita
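The conversion APIs under discussion can be exercised with a simple scalar loop; the following is a minimal sketch (plain Java, not the actual Fp16ConversionBenchmark JMH code) of the element-wise round trip that C2's auto-vectorizer can turn into VCVTPS2PH/VCVTPH2PS on x86. The class and method names here are illustrative, not from the patch; `Float.floatToFloat16` and `Float.float16ToFloat` are the JDK 20+ APIs named above.

```java
public class Fp16RoundTrip {
    // Convert a float[] to IEEE 754 binary16 bit patterns, element-wise.
    // This loop shape is what the superword transform targets.
    static short[] toHalf(float[] src) {
        short[] dst = new short[src.length];
        for (int i = 0; i < src.length; i++) {
            dst[i] = Float.floatToFloat16(src[i]);
        }
        return dst;
    }

    // Widen binary16 bit patterns back to float, element-wise.
    static float[] toFloat(short[] src) {
        float[] dst = new float[src.length];
        for (int i = 0; i < src.length; i++) {
            dst[i] = Float.float16ToFloat(src[i]);
        }
        return dst;
    }

    public static void main(String[] args) {
        // These values are exactly representable in binary16
        // (65504 is the largest finite half-precision value),
        // so the round trip is lossless.
        float[] in = { 1.0f, -2.5f, 0.5f, 65504.0f };
        float[] out = toFloat(toHalf(in));
        for (int i = 0; i < in.length; i++) {
            if (in[i] != out[i]) throw new AssertionError("mismatch at " + i);
        }
        System.out.println("round-trip ok");
    }
}
```

For values exactly representable in binary16 the round trip is exact; values outside that set are rounded to nearest-even on the narrowing conversion.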
src/hotspot/cpu/x86/x86.ad line 3684:
> 3682: %}
> 3683:
> 3684: instruct vconvF2HF(vec dst, vec src) %{
VCVTPS2PH also has a memory-destination flavour; adding a memory operand pattern would fold the subsequent store into a single instruction.
-------------
PR: https://git.openjdk.org/jdk/pull/11471
More information about the hotspot-compiler-dev mailing list