RFR: 8370691: Add new Float16Vector type and enable intrinsification of vector operations supported by auto-vectorizer [v5]

Bhavana Kilambi bkilambi at openjdk.org
Mon Dec 8 14:14:17 UTC 2025


On Wed, 26 Nov 2025 11:34:11 GMT, Jatin Bhateja <jbhateja at openjdk.org> wrote:

>> Add a new Float16Vector type and corresponding concrete vector classes, in addition to the existing primitive vector types, maintaining operation parity with the FloatVector type.
>> - Add the necessary inline expander support.
>> - Enable intrinsification for a few vector operations, namely ADD/SUB/MUL/DIV/MAX/MIN/FMA.
>> - Use the existing Float16 vector IR and backend support.
>> - Extend the existing VectorAPI JTREG test suite to cover the newly added Float16Vector operations.
>>  
>> The idea here is to first be on par with the existing Float16 auto-vectorization support before intrinsifying new operations (conversions, reductions, etc.).
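>> 
>> As a rough illustration of the intended programming model (not code from this PR; the species constant, the short[] carrier, and the exact method signatures are assumptions based on parity with FloatVector), an FMA kernel could look like:
>> 
>>     import jdk.incubator.vector.Float16Vector;
>>     import jdk.incubator.vector.VectorOperators;
>> 
>>     // Hypothetical sketch: mirrors the FloatVector idiom of fromArray/lanewise/intoArray.
>>     static void fmaKernel(short[] a, short[] b, short[] c, short[] r) {
>>         var species = Float16Vector.SPECIES_PREFERRED;  // assumed constant, as on FloatVector
>>         int i = 0;
>>         for (; i <= a.length - species.length(); i += species.length()) {
>>             var va = Float16Vector.fromArray(species, a, i);
>>             var vb = Float16Vector.fromArray(species, b, i);
>>             var vc = Float16Vector.fromArray(species, c, i);
>>             // FMA is one of the lanewise operations intrinsified by this change
>>             // (alongside ADD/SUB/MUL/DIV/MAX/MIN).
>>             va.lanewise(VectorOperators.FMA, vb, vc).intoArray(r, i);
>>         }
>>         // Scalar tail handling omitted for brevity.
>>     }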
>> 
>> The following performance numbers compare a selection of Float16Vector benchmark kernels against the equivalent auto-vectorized Float16OperationsBenchmark kernels.
>> 
>> [Performance comparison table: https://github.com/user-attachments/assets/b4474361-9886-4315-a614-f4073fd075b9]
>> 
>> An initial RFP [1] was floated on the panama-dev mailing list.
>> 
>> Kindly review the draft PR and share your feedback.
>> 
>> Best Regards,
>> Jatin
>> 
>> [1] https://mail.openjdk.org/pipermail/panama-dev/2025-August/021100.html
>
> Jatin Bhateja has updated the pull request incrementally with one additional commit since the last revision:
> 
>   Cleanups

test/hotspot/jtreg/compiler/vectorapi/TestFloat16VectorOperations.java line 82:

> 80:         output = new short[LEN];
> 81: 
> 82:         short min_value = float16ToRawShortBits(Float16.MIN_VALUE);

It looks like `min_value` and `max_value` are not used anywhere in the test?
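
For context, a minimal sketch of one way such constants are typically used in this kind of test, assuming hypothetical input arrays `input1`/`input2` and a `Float16.MAX_VALUE` counterpart to `Float16.MIN_VALUE` (illustrative only, not what the patch currently does):

    // Hypothetical: seed the inputs with Float16 boundary bit patterns so that
    // MIN/MAX and arithmetic lanewise operations are exercised at the value extremes.
    short min_value = float16ToRawShortBits(Float16.MIN_VALUE);
    short max_value = float16ToRawShortBits(Float16.MAX_VALUE);
    input1[0] = min_value;   // input1/input2 are assumed test arrays
    input2[0] = max_value;

Otherwise, if the declarations remain unused, they could simply be dropped.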

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/28002#discussion_r2598793109

