[code-reflection] Integrated: [hat] Proposal for bfloat16

Juan Fumero jfumero at openjdk.org
Wed Dec 3 16:49:37 UTC 2025


On Fri, 28 Nov 2025 15:18:43 GMT, Juan Fumero <jfumero at openjdk.org> wrote:

> This PR introduces the type [`bfloat16`](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus) for HAT.
> 
> Testing for OpenCL:
> 
> 
> HAT=SHOW_CODE java -cp hat/job.jar hat.java test ffi-opencl hat.test.TestBFloat16Type
> 
> 
> Testing for CUDA:
> 
> 
> HAT=SHOW_CODE java -cp hat/job.jar hat.java test ffi-cuda hat.test.TestBFloat16Type
> 
> 
> Some tests are expected to fail due to precision errors. We will need to improve the type conversion, for example with round-to-nearest-even.
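
For reference, the follow-up work mentioned above is the usual float32 -> bfloat16 narrowing with round-to-nearest-even. The sketch below is not taken from the HAT sources; it is plain Java illustrating the standard bit trick (add 0x7FFF plus the kept LSB, then drop the low 16 bits), with illustrative names only.

    // Minimal sketch, independent of HAT: float32 <-> bfloat16 conversion
    // using round-to-nearest-even. Names are hypothetical.
    public class BFloat16Sketch {

        // Narrow a float (binary32) to bfloat16, stored in a short.
        static short floatToBFloat16(float value) {
            int bits = Float.floatToRawIntBits(value);
            if (Float.isNaN(value)) {
                // Truncation alone could turn a NaN into infinity;
                // keep the sign/exponent and force a quiet-NaN mantissa bit.
                return (short) ((bits >>> 16) | 0x0040);
            }
            // Round-to-nearest, ties-to-even: bias by 0x7FFF plus the
            // least significant bit that will be kept, then truncate.
            int lsb = (bits >>> 16) & 1;
            return (short) ((bits + 0x7FFF + lsb) >>> 16);
        }

        // Widen bfloat16 back to float: same exponent layout, so shift left.
        static float bfloat16ToFloat(short bf16) {
            return Float.intBitsToFloat((bf16 & 0xFFFF) << 16);
        }

        public static void main(String[] args) {
            float x = 1.00390625f; // falls between two bfloat16 values
            short b = floatToBFloat16(x);
            System.out.printf("%f -> 0x%04x -> %f%n", x, b, bfloat16ToFloat(b));
        }
    }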

This pull request has now been integrated.

Changeset: dedbd84e
Author:    Juan Fumero <jfumero at openjdk.org>
URL:       https://git.openjdk.org/babylon/commit/dedbd84edeed30a5e2b3993bc1c6aee43231340e
Stats:     1848 lines in 37 files changed: 1611 ins; 126 del; 111 mod

[hat] Proposal for bfloat16

-------------

PR: https://git.openjdk.org/babylon/pull/716

