[code-reflection] RFR: [hat] Extensions of F16 (API and codegen)

Juan Fumero jfumero at openjdk.org
Wed Nov 5 12:18:31 UTC 2025


- New dialect nodes for `OpConvert` from `float` to `half` and from `half` to `float`; note that these conversions differ between CUDA and OpenCL
- Add a fluent API for `F16` operations (similar to `Float4`)
- Add mixed float-precision operations (`F16` -> `float`)
- Refactor the `F16` interface into a new file
- Fix <struct/union> parentheses in the codegen
- Allow initialization of `F16` values on the GPU using the `float2F16` builtin, as well as `Float16Tofloat`
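The float↔half conversions that the new dialect nodes model follow IEEE 754 binary16 semantics, which the JDK itself exposes since Java 20 via `Float.floatToFloat16` and `Float.float16ToFloat`. A minimal plain-Java sketch of those semantics (this uses the standard JDK methods, not HAT's `float2F16`/`Float16Tofloat` builtins, and is only illustrative of the conversion behavior the backends generate):

```java
public class F16ConvertDemo {
    public static void main(String[] args) {
        // float -> half: the half value is carried as its 16-bit pattern in a short
        short h = Float.floatToFloat16(1.5f);
        // half -> float: widening back is exact for any binary16 value
        float back = Float.float16ToFloat(h);
        System.out.println(back); // 1.5 is exactly representable in binary16

        // Narrowing to half rounds: 0.1f cannot be represented in 10 fraction bits
        short h2 = Float.floatToFloat16(0.1f);
        System.out.println(Float.float16ToFloat(h2)); // close to, but not equal to, 0.1f
    }
}
```

The round trip is lossless only when the original `float` is representable in binary16, which is why mixed-precision (`F16` -> `float`) arithmetic widens before operating.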

-------------

Commit messages:
 - [hat] minor change
 - [hat] F16 with mixed float types supported for CUDA backend
 - [hat] F16 ops with mixed f32 operations
 - [hat][f16] Concatenation of F16 operations supported
 - Merge branch 'code-reflection' into hat/fp16/extension
 - [hat][f16] WIP for local memory
 - minor recformating
 - patch for matmul in fp16
 - [hat] refine F16Phase
 - [hat] matmul express with F16: wip
 - ... and 6 more: https://git.openjdk.org/babylon/compare/f4c7e327...a6673799

Changes: https://git.openjdk.org/babylon/pull/663/files
  Webrev: https://webrevs.openjdk.org/?repo=babylon&pr=663&range=00
  Stats: 1158 lines in 21 files changed: 1006 ins; 73 del; 79 mod
  Patch: https://git.openjdk.org/babylon/pull/663.diff
  Fetch: git fetch https://git.openjdk.org/babylon.git pull/663/head:pull/663

PR: https://git.openjdk.org/babylon/pull/663
