RFR: 8342103: C2 compiler support for Float16 type and associated scalar operations [v2]

Jatin Bhateja jbhateja at openjdk.org
Mon Dec 16 08:35:33 UTC 2024


On Mon, 16 Dec 2024 07:22:04 GMT, Emanuel Peter <epeter at openjdk.org> wrote:

> Can you quickly summarize what tests you have, and what they test?

The patch includes functional and performance tests. As per your suggestions, the IR framework-based tests now cover various special cases for the constant-folding transformation.  Let me know if you see any gaps.
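As a rough illustration only (not the actual test from the patch), a constant-folding check in the IR framework might look like the sketch below; the IRNode tags and the test body here are assumptions, not the patch's code:

    // Hypothetical sketch: checks that a Float16 computation on compile-time
    // constants leaves no runtime conversion nodes behind. Tag names assumed.
    @Test
    @IR(failOn = {IRNode.CONV_F2HF, IRNode.CONV_HF2F})
    public static short foldConstantAdd() {
        // 0x3C00 and 0x4000 are the FP16 bit patterns for 1.0 and 2.0;
        // with constant inputs the whole chain should fold to a constant.
        return Float.floatToFloat16(Float.float16ToFloat((short) 0x3C00)
                                    + Float.float16ToFloat((short) 0x4000));
    }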

> test/hotspot/jtreg/compiler/vectorization/TestFloat16VectorConvChain.java line 49:
> 
>> 47:         counts = {IRNode.VECTOR_CAST_HF2F, IRNode.VECTOR_SIZE_ANY, ">= 1", IRNode.VECTOR_CAST_F2HF, IRNode.VECTOR_SIZE_ANY, " >= 1"})
>> 48:     @IR(applyIfCPUFeatureAnd = {"avx512_fp16", "false", "zvfh", "true"},
>> 49:         counts = {IRNode.VECTOR_CAST_HF2F, IRNode.VECTOR_SIZE_ANY, ">= 1", IRNode.VECTOR_CAST_F2HF, IRNode.VECTOR_SIZE_ANY, " >= 1"})
> 
> Looks like this involves vector changes?
> And this is pre-existing: but why are we using `VECTOR_SIZE_ANY` here? Can we not know the vector size? Maybe we can introduce a new tag `max_float16` or `max_hf`. And do something like this:
> `IRNode.VECTOR_SIZE + "min(max_float, max_hf)", "> 0"`
> 
> The downside with using `ANY` is that the exact size is not tested, and that might mean that the size is much smaller than ideal.

Hi @eme64, the test modification looks OK to me; we intend to trigger these IR rules only on non-AVX512-FP16 targets.
On an AVX512-FP16 target, the compiler infers a scalar Float16 add operation, which does not get auto-vectorized.
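For illustration, here is a minimal sketch of the kind of conversion-chain kernel such a rule matches, combined with your suggested size constraint; note that the `max_hf` tag does not exist yet, and the kernel body and tag usage are assumptions rather than the patch's actual test:

    // Sketch only: the real test body and IR tags in the patch may differ.
    // On targets without avx512_fp16 (here, RISC-V with zvfh), the loop is
    // expected to auto-vectorize into VectorCastHF2F / VectorCastF2HF nodes.
    @Test
    @IR(applyIfCPUFeatureAnd = {"avx512_fp16", "false", "zvfh", "true"},
        counts = {IRNode.VECTOR_CAST_HF2F,
                  IRNode.VECTOR_SIZE + "min(max_float, max_hf)", "> 0",  // suggested form; max_hf would be a new tag
                  IRNode.VECTOR_CAST_F2HF,
                  IRNode.VECTOR_SIZE + "min(max_float, max_hf)", "> 0"})
    static void addHalfKernel(short[] a, short[] b, short[] r) {
        for (int i = 0; i < r.length; i++) {
            // Float.float16ToFloat / floatToFloat16 operate on raw FP16 bits held in shorts,
            // so the add forms an HF2F -> add -> F2HF conversion chain.
            r[i] = Float.floatToFloat16(Float.float16ToFloat(a[i]) + Float.float16ToFloat(b[i]));
        }
    }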

-------------

PR Comment: https://git.openjdk.org/jdk/pull/22754#issuecomment-2544914959
PR Review Comment: https://git.openjdk.org/jdk/pull/22754#discussion_r1886373922
