[code-reflection] Integrated: [hat] Initial support for 16-bits float types (half)
Juan Fumero
jfumero at openjdk.org
Tue Oct 21 09:33:04 UTC 2025
On Tue, 21 Oct 2025 08:40:11 GMT, Juan Fumero <jfumero at openjdk.org> wrote:
> This PR extends HAT with support for OpenCL and CUDA `half` types.
>
> From the Java side:
>
>
> F16Array.F16 ha = a.array(kernelContext.gix);
> F16Array.F16 hb = b.array(kernelContext.gix);
> hb.value(ha.value());
>
>
> We can also perform common arithmetic operations on half types:
>
>
> F16Array.F16 ha = a.array(kernelContext.gix);
> F16Array.F16 hb = b.array(kernelContext.gix);
>
> F16Array.F16 r1 = F16.mul(ha, hb);
> F16Array.F16 r2 = F16.div(ha, hb);
> F16Array.F16 r3 = F16.sub(ha, hb);
> F16Array.F16 r4 = F16.add(r1, r2);
> F16Array.F16 r5 = F16.mul(r4, r3);
> F16Array.F16 hC = c.array(kernelContext.gix);
> hC.value(r5.value());
>
>
> To run with FP16, HAT must run on GPUs with native half-precision support, for both the OpenCL and CUDA backends. For example, for OpenCL, the device must support the `cl_khr_fp16` extension.
>
> In the near future, we will extend this with more operations.
>
> How to test (OpenCL):
>
>
> HAT=SHOW_CODE,SHOW_COMPILATION_PHASES java @hat/otest ffi-opencl hat.test.TestFP16Type
>
>
> CUDA:
>
>
> HAT=SHOW_CODE,SHOW_COMPILATION_PHASES java @hat/otest ffi-cuda hat.test.TestFP16Type
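For readers without an FP16-capable GPU, the semantics of the quoted `F16.add`-style operations can be emulated on the host with the standard `Float.floatToFloat16` / `Float.float16ToFloat` conversions available since Java 20. This is only a host-side sketch, not HAT code: `Fp16Demo` and `addF16` are hypothetical names, and HAT presumably lowers `F16` operations to native half arithmetic on the device rather than round-tripping through `float` as done here.

```java
// Host-side sketch (hypothetical, not HAT): FP16 values are carried as
// raw short bits; arithmetic widens to float and narrows back, which
// matches IEEE 754 binary16 results for exactly representable values.
public class Fp16Demo {
    // Hypothetical helper mirroring the shape of HAT's F16.add.
    static short addF16(short a, short b) {
        return Float.floatToFloat16(
                Float.float16ToFloat(a) + Float.float16ToFloat(b));
    }

    public static void main(String[] args) {
        short ha = Float.floatToFloat16(1.5f);   // 1.5 is exact in FP16
        short hb = Float.floatToFloat16(2.25f);  // 2.25 is exact in FP16
        short hc = addF16(ha, hb);
        System.out.println(Float.float16ToFloat(hc)); // prints 3.75
    }
}
```

The round-trip also makes FP16's reduced precision visible: for instance, values above 2048 are quantized in steps of 2, so `Float.float16ToFloat(Float.floatToFloat16(2049f))` yields `2048.0f`.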
This pull request has now been integrated.
Changeset: 5128c70b
Author: Juan Fumero <jfumero at openjdk.org>
URL: https://git.openjdk.org/babylon/commit/5128c70b87c209d781307cf583826a2f5b8a1a8d
Stats: 986 lines in 27 files changed: 929 ins; 44 del; 13 mod
[hat] Initial support for 16-bits float types (half)
-------------
PR: https://git.openjdk.org/babylon/pull/624
More information about the babylon-dev mailing list