[code-reflection] RFR: [hat] Proposal for bfloat16 [v5]
Juan Fumero
jfumero at openjdk.org
Wed Dec 3 14:04:02 UTC 2025
> This PR introduces the type [`bfloat16`](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus) for HAT.
>
> Testing for OpenCL:
>
>
> HAT=SHOW_CODE java -cp hat/job.jar hat.java test ffi-opencl hat.test.TestBFloat16Type
>
>
> Testing for CUDA:
>
>
> HAT=SHOW_CODE java -cp hat/job.jar hat.java test ffi-cuda hat.test.TestBFloat16Type
>
>
> Some tests are expected to fail due to precision errors. We will need to improve the type conversion, for example by using round-to-nearest-even.
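For reference, a minimal sketch of what round-to-nearest-even float32-to-bfloat16 conversion could look like. This is not HAT's actual implementation; the class and method names (`BF16Sketch`, `floatToBFloat16RNE`, `bfloat16ToFloat`) are illustrative. It uses the standard trick of adding a rounding bias of `0x7FFF` plus the least-significant kept bit before truncating to the high 16 bits:

```java
public class BF16Sketch {

    // Convert float32 -> bfloat16 (stored in a short) with round-to-nearest-even.
    // bfloat16 is the top 16 bits of an IEEE 754 binary32 value.
    public static short floatToBFloat16RNE(float f) {
        if (Float.isNaN(f)) {
            return (short) 0x7FC0; // canonical quiet NaN
        }
        int bits = Float.floatToRawIntBits(f);
        // lsb of the part we keep; adding it to 0x7FFF makes ties round to even.
        int lsb = (bits >>> 16) & 1;
        bits += 0x7FFF + lsb;
        return (short) (bits >>> 16);
    }

    // Convert bfloat16 back to float32 by shifting into the high 16 bits.
    public static float bfloat16ToFloat(short h) {
        return Float.intBitsToFloat((h & 0xFFFF) << 16);
    }

    public static void main(String[] args) {
        // Exactly representable values round-trip unchanged.
        System.out.println(bfloat16ToFloat(floatToBFloat16RNE(1.0f)));  // 1.0
        System.out.println(bfloat16ToFloat(floatToBFloat16RNE(-2.0f))); // -2.0
        // A halfway case: 0x3F808000 ties down to the even result 0x3F80.
        System.out.printf("0x%04X%n",
            floatToBFloat16RNE(Float.intBitsToFloat(0x3F808000)) & 0xFFFF);
    }
}
```

A plain truncation (`bits >>> 16` with no bias) would be cheaper but biases results toward zero, which is the kind of precision error the failing tests likely observe.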
Juan Fumero has updated the pull request incrementally with one additional commit since the last revision:
[hat] fix builtin copyBytes
-------------
Changes:
- all: https://git.openjdk.org/babylon/pull/716/files
- new: https://git.openjdk.org/babylon/pull/716/files/22d0ef65..b5dfe87f
Webrevs:
- full: https://webrevs.openjdk.org/?repo=babylon&pr=716&range=04
- incr: https://webrevs.openjdk.org/?repo=babylon&pr=716&range=03-04
Stats: 7 lines in 2 files changed: 3 ins; 3 del; 1 mod
Patch: https://git.openjdk.org/babylon/pull/716.diff
Fetch: git fetch https://git.openjdk.org/babylon.git pull/716/head:pull/716
PR: https://git.openjdk.org/babylon/pull/716
More information about the babylon-dev mailing list