[lworld+fp16] RFR: 8338061: Add support for FP16 unary and ternary operations
Bhavana Kilambi
bkilambi at openjdk.org
Wed Aug 21 16:02:25 UTC 2024
On Tue, 20 Aug 2024 11:56:30 GMT, Jatin Bhateja <jbhateja at openjdk.org> wrote:
>> I have the conversion ops and isFinite/Infinite/NaN intrinsics still left to complete.
>
> I am in the process of porting the Float16 extension to VectorAPI into lworld+fp16; that will consolidate the Float16 effort on the lworld+fp16 branch and we can leverage the unified backend implementation. With VectorAPI we will also need to support reductions for Float16 vectors :-)
@jatin-bhateja Thanks for the effort. It will be good to test Float16 with VectorAPI and add the missing operations. I think we need to support Float16 reductions in auto-vectorization as well? I don't think those operations have been added for FP16 yet.
Also, about the design: instead of modifying the existing opcodes or adding a separate secondary opcode for these operations, can we reuse the existing FP32 nodes for FP16 as well? We could add a flag to the FP32 IR nodes indicating whether a node is an FP16 (or eventually FP8) op. That way we can reuse the same Value(), Ideal() and Identity() methods, and modify them based on the flag's value whenever we do not want a particular optimization to be applied for FP16/FP8. In the backend, the same flag can be used to emit the correct instruction with the correct SIMD arrangement. I guess this might introduce a lot of flag checks in the compiler code. What are your thoughts on it? A rough standalone sketch of the idea is below.
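To make the idea concrete, here is a small standalone toy in plain C++ (not actual HotSpot code; all names such as FloatPrecision, AddFNodeLike, round_to_half, value() and instruction() are made up for illustration) showing how one node class plus a precision flag could cover both cases:

```c++
// Self-contained toy illustrating the flag idea; this is NOT real HotSpot
// code.  The point is that a single FP32-style node carries a precision
// flag which both constant folding and instruction selection consult,
// instead of adding a new opcode per precision.
#include <cstdint>
#include <cstdio>
#include <cstring>

enum class FloatPrecision { FP32, FP16 /*, FP8 eventually */ };

// Very simplified float -> binary16 -> float rounding: it only truncates the
// mantissa to 10 bits and ignores exponent range, subnormals, NaNs and the
// rounding mode.  Just enough to show where the flag would matter in Value().
static float round_to_half(float v) {
  uint32_t bits;
  std::memcpy(&bits, &v, sizeof bits);
  bits &= ~((1u << 13) - 1);   // binary16 keeps 10 of float's 23 mantissa bits
  float out;
  std::memcpy(&out, &bits, sizeof out);
  return out;
}

// One node class serving both precisions, mirroring the proposal of reusing
// the existing FP32 add node instead of introducing a separate FP16 opcode.
struct AddFNodeLike {
  FloatPrecision prec;
  float in1, in2;

  // "Value()": constant folding narrows the result when the flag says FP16.
  float value() const {
    float sum = in1 + in2;
    return prec == FloatPrecision::FP16 ? round_to_half(sum) : sum;
  }

  // "Backend": the same flag selects the instruction / SIMD arrangement.
  const char* instruction() const {
    return prec == FloatPrecision::FP16 ? "fadd v0.8h, v1.8h, v2.8h"
                                        : "fadd v0.4s, v1.4s, v2.4s";
  }
};

int main() {
  AddFNodeLike n32{FloatPrecision::FP32, 1.0f, 0.0001f};
  AddFNodeLike n16{FloatPrecision::FP16, 1.0f, 0.0001f};
  std::printf("FP32 fold: %.7f via %s\n", n32.value(), n32.instruction());
  std::printf("FP16 fold: %.7f via %s\n", n16.value(), n16.instruction());
  return 0;
}
```

With something like this, a transform that is only valid for full-precision floats could simply bail out when the flag is set, rather than needing a separate node class per precision.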
-------------
PR Review Comment: https://git.openjdk.org/valhalla/pull/1211#discussion_r1725346759