[code-reflection] Withdrawn: [hat] ComputeRange and ThreadMesh API for defining 1D, 2D and 3D Ranges
Juan Fumero
duke at openjdk.org
Fri Aug 8 14:04:40 UTC 2025
On Thu, 7 Aug 2025 15:20:29 GMT, Juan Fumero <duke at openjdk.org> wrote:
> This PR proposes an extension of the HAT API to leverage 1D, 2D and 3D ranges for the compute context dispatch.
> A `ComputeRange` is an entity that holds a global and a local thread mesh. In the future, we can add offsets to it.
>
> Each `ThreadMesh` is a triplet representing the number of threads in the x, y, and z dimensions.
>
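> For illustration, the two entities can be sketched as plain Java records. This is a hypothetical sketch inferred from the snippets below; the actual HAT classes may be implemented differently.
>
> ```java
> // Hypothetical sketch of the two entities; names and shapes are
> // assumed from the usage examples, not taken from the HAT sources.
> record ThreadMesh(int x, int y, int z) {
>     ThreadMesh(int x)        { this(x, 1, 1); }   // 1D mesh
>     ThreadMesh(int x, int y) { this(x, y, 1); }   // 2D mesh
>     int totalThreads() { return x * y * z; }
>     int dims() { return z > 1 ? 3 : (y > 1 ? 2 : 1); }
> }
>
> record ComputeRange(ThreadMesh global, ThreadMesh local) {
>     // The local mesh is optional; null lets the runtime pick defaults.
>     ComputeRange(ThreadMesh global) { this(global, null); }
> }
> ```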
> How to dispatch 1D kernels?
>
>
> ComputeRange range1D = new ComputeRange(new ThreadMesh(size));
> cc.dispatchKernel(range1D,
>         kc -> myKernel(...));
>
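> Semantically, a 1D dispatch launches one kernel instance per global thread id in `[0, size)`. A sequential sketch of that semantics (`dispatch1D` and the vector-add kernel below are illustrative, not HAT API):
>
> ```java
> import java.util.Arrays;
> import java.util.function.IntConsumer;
>
> public class Dispatch1DSketch {
>     // Sequential stand-in for a 1D kernel dispatch: one invocation
>     // per global id. A real backend runs these in parallel.
>     static void dispatch1D(int size, IntConsumer kernel) {
>         for (int gid = 0; gid < size; gid++) {
>             kernel.accept(gid);
>         }
>     }
>
>     public static void main(String[] args) {
>         float[] a = {1, 2, 3, 4};
>         float[] b = {10, 20, 30, 40};
>         float[] c = new float[4];
>         dispatch1D(c.length, gid -> c[gid] = a[gid] + b[gid]);
>         System.out.println(Arrays.toString(c)); // [11.0, 22.0, 33.0, 44.0]
>     }
> }
> ```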
>
> How to dispatch 2D kernels?
>
>
> ComputeRange range2D = new ComputeRange(new ThreadMesh(size, size));
> cc.dispatchKernel(range2D,
>         kc -> my2DKernel(...));
>
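> In the 2D case every `(x, y)` pair in the global mesh gets one kernel instance, which avoids flattening indices by hand in the kernel. A sequential sketch (the `dispatch2D` helper and the transpose kernel are illustrative, not HAT API):
>
> ```java
> import java.util.Arrays;
>
> public class Dispatch2DSketch {
>     interface Kernel2D { void run(int x, int y); }
>
>     // Sequential stand-in for a 2D dispatch: one invocation per
>     // (x, y) pair in the global mesh.
>     static void dispatch2D(int sizeX, int sizeY, Kernel2D kernel) {
>         for (int y = 0; y < sizeY; y++) {
>             for (int x = 0; x < sizeX; x++) {
>                 kernel.run(x, y);
>             }
>         }
>     }
>
>     public static void main(String[] args) {
>         int n = 3;
>         int[] in  = {0, 1, 2, 3, 4, 5, 6, 7, 8};   // row-major 3x3
>         int[] out = new int[n * n];
>         dispatch2D(n, n, (x, y) -> out[x * n + y] = in[y * n + x]); // transpose
>         System.out.println(Arrays.toString(out)); // [0, 3, 6, 1, 4, 7, 2, 5, 8]
>     }
> }
> ```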
>
> How to enable local mesh?
>
> We pass a second parameter to the `ComputeRange` constructor to define the local mesh. If it is omitted, the local mesh is `null` and the HAT runtime selects a default set of values.
>
>
> ComputeRange computeRange = new ComputeRange(
>         new ThreadMesh(globalSize, globalSize),
>         new ThreadMesh(16, 16));
> cc.dispatchKernel(computeRange,
>         kc -> matrixMultiplyKernel2D(kc, matrixA, matrixB, matrixC, globalSize)
> );
>
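> The local mesh partitions the global mesh into workgroups, OpenCL-style: each global dimension is divided by the corresponding local dimension, so a 512x512 global mesh with a 16x16 local mesh yields 32x32 groups. A small sketch of that arithmetic (the helper is illustrative, not HAT API):
>
> ```java
> import java.util.Arrays;
>
> public class LocalMeshSketch {
>     // Number of workgroups per dimension, assuming the usual
>     // OpenCL constraint that global is divisible by local.
>     static int[] groupsPerDim(int[] global, int[] local) {
>         int[] groups = new int[3];
>         for (int d = 0; d < 3; d++) {
>             if (global[d] % local[d] != 0) {
>                 throw new IllegalArgumentException(
>                         "global not divisible by local in dim " + d);
>             }
>             groups[d] = global[d] / local[d];
>         }
>         return groups;
>     }
>
>     public static void main(String[] args) {
>         int[] groups = groupsPerDim(new int[]{512, 512, 1},
>                                     new int[]{16, 16, 1});
>         System.out.println(Arrays.toString(groups)); // [32, 32, 1]
>     }
> }
> ```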
>
> In addition, this PR renames the internal `KernelContext` API, which maps the context ndrange object to native memory, to `KernelBufferContext`.
>
>
> #### How to check?
>
>
> java @hat/run ffi-opencl matmul 1D
> java @hat/run ffi-opencl matmul 2D
>
> java @hat/run ffi-cuda matmul 1D
> java @hat/run ffi-cuda matmul 2D
This pull request has been closed without being integrated.
-------------
PR: https://git.openjdk.org/babylon/pull/515