[code-reflection] RFR: [hat] Proposal for bfloat16 [v3]

Juan Fumero jfumero at openjdk.org
Wed Dec 3 13:40:38 UTC 2025


> This PR introduces the type [`bfloat16`](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus) for HAT.
> 
> Testing for OpenCL:
> 
> 
> HAT=SHOW_CODE java -cp hat/job.jar hat.java test ffi-opencl hat.test.TestBFloat16Type
> 
> 
> Testing for CUDA:
> 
> 
> HAT=SHOW_CODE java -cp hat/job.jar hat.java test ffi-cuda hat.test.TestBFloat16Type
> 
> 
> Some tests are expected to fail due to precision errors. We will need to improve the type conversion, for example by using round-to-nearest-even.
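
For reference, below is a minimal Java sketch (not code from this PR; the class and method names are hypothetical) contrasting plain truncation, which rounds toward zero and accounts for the precision errors mentioned above, with the standard bias-and-truncate formulation of round-to-nearest-even:

    // Minimal sketch, hypothetical names; not code from this PR.
    // bfloat16 keeps float32's sign bit and 8-bit exponent but only 7
    // mantissa bits, so a float32 narrows by dropping its low 16 bits.
    public class BFloat16Sketch {

        // Truncation: cheap, but always rounds toward zero.
        static short toBFloat16Truncate(float f) {
            return (short) (Float.floatToRawIntBits(f) >>> 16);
        }

        // Round-to-nearest-even: add a bias derived from the discarded
        // low bits before truncating (IEEE 754 default rounding).
        static short toBFloat16Rne(float f) {
            if (Float.isNaN(f)) {
                return (short) 0x7FC0; // canonical bfloat16 NaN
            }
            int bits = Float.floatToRawIntBits(f);
            int lsb = (bits >>> 16) & 1; // last mantissa bit that survives
            return (short) ((bits + 0x7FFF + lsb) >>> 16);
        }

        // Widening back to float32 is exact: shift into the high half.
        static float toFloat(short bf16) {
            return Float.intBitsToFloat((bf16 & 0xFFFF) << 16);
        }

        public static void main(String[] args) {
            float x = 1.005859375f; // discarded low bits are 0xC000
            System.out.println("truncate: " + toFloat(toBFloat16Truncate(x))); // 1.0
            System.out.println("rne:      " + toFloat(toBFloat16Rne(x)));      // 1.0078125
        }
    }

Truncation and round-to-nearest-even agree except when the discarded low 16 bits are at or above the halfway point, as in the example value, where truncation stays at 1.0 while round-to-nearest-even moves to the closer bfloat16 value 1.0078125.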

Juan Fumero has updated the pull request incrementally with one additional commit since the last revision:

  Use built-ins to process bfloat16 in the OpenCL backend

-------------

Changes:
  - all: https://git.openjdk.org/babylon/pull/716/files
  - new: https://git.openjdk.org/babylon/pull/716/files/ddc932c3..df55159c

Webrevs:
 - full: https://webrevs.openjdk.org/?repo=babylon&pr=716&range=02
 - incr: https://webrevs.openjdk.org/?repo=babylon&pr=716&range=01-02

  Stats: 118 lines in 3 files changed: 93 ins; 20 del; 5 mod
  Patch: https://git.openjdk.org/babylon/pull/716.diff
  Fetch: git fetch https://git.openjdk.org/babylon.git pull/716/head:pull/716

PR: https://git.openjdk.org/babylon/pull/716