[vectorIntrinsics+fp16] RFR: 8365967: C2 compiler support for HalffloatVector operations supported by auto-vectorization flow
Jatin Bhateja
jbhateja at openjdk.org
Fri Aug 29 12:10:39 UTC 2025
On Fri, 22 Aug 2025 17:39:18 GMT, Jatin Bhateja <jbhateja at openjdk.org> wrote:
> Hi All,
>
> This patch extends VectorAPI inline expanders to infer Float16 vector IR based on the newly passed operType argument.
> We intend to leverage the existing IR and backend implementation of auto-vectorized Float16 operations.
> Various HalffloatVector operators, namely ADD, SUB, MUL, DIV, MAX, MIN, and FMA, now emit FP16 instructions on x86 targets that support the AVX512-FP16 feature and on AArch64 SVE targets (a usage sketch follows the quoted description below).
>
> Please note that the patch targets **vectorIntrinsics+fp16** branch and is based on top of https://github.com/openjdk/panama-vector/pull/230
>
> What remains to be done?
> - Functional validation
> - Performance validation
> - New IR framework-based tests.
> - Microbenchmark for an FP16-based dot product (a rough kernel sketch appears after the benchmark results below).
>
> Best Regards,
> Jatin
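
For context, below is a minimal sketch of how these lanewise Float16 operations would be written against the Vector API. The names used here (HalffloatVector, a Halffloat species, short[]-backed fromArray/intoArray) are assumptions based on how the vectorIntrinsics+fp16 branch mirrors FloatVector; the exact signatures on the branch may differ.

```java
import jdk.incubator.vector.Halffloat;
import jdk.incubator.vector.HalffloatVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

public class Fp16FmaSketch {
    // Assumed to mirror FloatVector.SPECIES_PREFERRED; the branch may expose
    // a different species constant.
    static final VectorSpecies<Halffloat> SPECIES = HalffloatVector.SPECIES_PREFERRED;

    // r[i] = a[i] * b[i] + c[i], with FP16 values carried in their raw
    // 16-bit (short) representation.
    static void fma(short[] a, short[] b, short[] c, short[] r) {
        int i = 0;
        int upper = SPECIES.loopBound(a.length);
        for (; i < upper; i += SPECIES.length()) {
            HalffloatVector av = HalffloatVector.fromArray(SPECIES, a, i);
            HalffloatVector bv = HalffloatVector.fromArray(SPECIES, b, i);
            HalffloatVector cv = HalffloatVector.fromArray(SPECIES, c, i);
            // The lanewise FMA (like ADD, SUB, MUL, DIV, MAX, MIN) is what the
            // patch expands to Float16 vector IR and, on AVX512-FP16 or SVE
            // targets, to native FP16 instructions.
            av.lanewise(VectorOperators.FMA, bv, cv).intoArray(r, i);
        }
        // Scalar tail via the FP16 conversion helpers added in JDK 20.
        for (; i < a.length; i++) {
            float x = Float.float16ToFloat(a[i]);
            float y = Float.float16ToFloat(b[i]);
            float z = Float.float16ToFloat(c[i]);
            r[i] = Float.floatToFloat16(Math.fma(x, y, z));
        }
    }
}
```

The intent is that each lanewise call in such a loop is lowered by the inline expander to Float16 vector IR, so the backend patterns already used for auto-vectorized Float16 code can be reused.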
Performance of the [FMA benchmark](https://github.com/jatin-bhateja/external_staging/blob/main/Code/java/vector-api/half-float/FmaTest.java) on Intel Xeon Emerald Rapids (INTEL(R) XEON(R) PLATINUM 8581C CPU @ 2.30GHz):
[Benchmark results screenshot: https://github.com/user-attachments/assets/8120ebb3-3e74-4242-b0a2-f1ffb94d6474]
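
As a rough illustration of the kind of kernel the planned FP16 dot-product microbenchmark would exercise, here is a hedged sketch built on the same assumed API as above; it is not FmaTest.java, and the accumulation strategy is illustrative only.

```java
// Dot product over FP16 inputs: the accumulator stays in FP16 so that the
// loop is a pure chain of FP16 FMAs; the horizontal reduction and the scalar
// tail are done in float. Accumulating in FP16 sacrifices accuracy, so a real
// benchmark may prefer a float accumulator instead.
static float dotFp16(short[] a, short[] b) {
    HalffloatVector acc = HalffloatVector.zero(SPECIES);   // assumed, mirroring FloatVector.zero
    int i = 0;
    int upper = SPECIES.loopBound(a.length);
    for (; i < upper; i += SPECIES.length()) {
        HalffloatVector av = HalffloatVector.fromArray(SPECIES, a, i);
        HalffloatVector bv = HalffloatVector.fromArray(SPECIES, b, i);
        acc = av.lanewise(VectorOperators.FMA, bv, acc);   // acc = a * b + acc, lane-wise
    }
    // Spill the FP16 accumulator lanes and finish the reduction in float.
    short[] lanes = new short[SPECIES.length()];
    acc.intoArray(lanes, 0);
    float sum = 0.0f;
    for (short lane : lanes) {
        sum += Float.float16ToFloat(lane);
    }
    for (; i < a.length; i++) {
        sum = Math.fma(Float.float16ToFloat(a[i]), Float.float16ToFloat(b[i]), sum);
    }
    return sum;
}
```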
-------------
PR Comment: https://git.openjdk.org/panama-vector/pull/231#issuecomment-3220357628