[code-reflection] RFR: [hat] Initial support for 16-bit float types (half)

Juan Fumero jfumero at openjdk.org
Tue Oct 21 09:06:13 UTC 2025


This PR extends HAT with support for the `half` (FP16) type on the OpenCL and CUDA backends.

From the Java side, a half element can be read and written as follows:


F16Array.F16 ha = a.array(kernelContext.gix); // element of a at this work-item's global index
F16Array.F16 hb = b.array(kernelContext.gix); // element of b at the same index
hb.value(ha.value());                         // copy the half value from a into b
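
Here `value()` reads and writes a single half element. Since Java has no primitive half type, the element is presumably carried as an IEEE 754 binary16 bit pattern. Independent of HAT, the standard library (JDK 20 and later) can already convert between float and binary16, which is convenient for preparing or verifying FP16 data on the host side. A minimal sketch; the class name is illustrative:


public class Binary16RoundTrip {
    public static void main(String[] args) {
        // Convert a float to its IEEE 754 binary16 bit pattern and back,
        // using Float.floatToFloat16 / Float.float16ToFloat (JDK 20+).
        short bits = Float.floatToFloat16(1.5f);
        float back = Float.float16ToFloat(bits);
        System.out.println(back); // prints 1.5 (exactly representable in binary16)
    }
}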


We can also perform common arithmetic operations on half types:


F16Array.F16 ha = a.array(kernelContext.gix);
F16Array.F16 hb = b.array(kernelContext.gix);

F16Array.F16 r1 = F16.mul(ha, hb); // ha * hb
F16Array.F16 r2 = F16.div(ha, hb); // ha / hb
F16Array.F16 r3 = F16.sub(ha, hb); // ha - hb
F16Array.F16 r4 = F16.add(r1, r2); // r1 + r2
F16Array.F16 r5 = F16.mul(r4, r3); // r4 * r3
F16Array.F16 hC = c.array(kernelContext.gix);
hC.value(r5.value());              // store the result into c
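
Put together, this computes (ha * hb + ha / hb) * (ha - hb) per work item. Below is a minimal sketch of how it might sit inside a complete HAT kernel. Assumptions are flagged: the `@CodeReflection` annotation and the `KernelContext` parameter follow HAT's existing kernel shape, the import paths mirror HAT's other buffer types, and the class and method names are illustrative rather than taken from this PR:


// Sketch only: imports and kernel shape are assumed from HAT's existing
// examples; they are not confirmed by this PR.
import hat.KernelContext;
import hat.buffer.F16Array;
import hat.buffer.F16Array.F16;
import jdk.incubator.code.CodeReflection;

public class FP16Compute {
    @CodeReflection
    public static void fp16Kernel(KernelContext kernelContext,
                                  F16Array a, F16Array b, F16Array c) {
        F16Array.F16 ha = a.array(kernelContext.gix);
        F16Array.F16 hb = b.array(kernelContext.gix);
        // (ha * hb + ha / hb) * (ha - hb), evaluated in half precision
        F16Array.F16 r = F16.mul(F16.add(F16.mul(ha, hb), F16.div(ha, hb)),
                                 F16.sub(ha, hb));
        c.array(kernelContext.gix).value(r.value()); // write the result into c
    }
}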


To run with FP16, HAT needs a GPU with half-precision support for both the OpenCL and CUDA backends. For OpenCL, for example, it must run on devices that support the `cl_khr_fp16` extension.

In the near future, we will extend this with more operations.

How to test (OpenCL):


HAT=SHOW_CODE,SHOW_COMPILATION_PHASES java @hat/otest ffi-opencl hat.test.TestFP16Type


How to test (CUDA):


HAT=SHOW_CODE,SHOW_COMPILATION_PHASES java @hat/otest ffi-cuda hat.test.TestFP16Type

-------------

Commit messages:
 - [hat] hatVectorVarLoadOp moved to C99 builder
 - [hat] cleanup f16phase
 - [HAT] CUDA backend F16 initial support
 - [hat] missing header license added
 - Merge branch 'code-reflection' into hat/types/fp16
 - [hat] HalfType unittest fixed
 - [hat] HalfType for OpenCL and common operations
 - [hat] Initial implementation for FP16 and OpenCL backend

Changes: https://git.openjdk.org/babylon/pull/624/files
  Webrev: https://webrevs.openjdk.org/?repo=babylon&pr=624&range=00
  Stats: 986 lines in 27 files changed: 929 ins; 44 del; 13 mod
  Patch: https://git.openjdk.org/babylon/pull/624.diff
  Fetch: git fetch https://git.openjdk.org/babylon.git pull/624/head:pull/624

PR: https://git.openjdk.org/babylon/pull/624

