RFR: 8343689: AArch64: Optimize MulReduction implementation [v3]
Xiaohong Gong
xgong at openjdk.org
Thu Feb 27 03:31:57 UTC 2025
On Wed, 26 Feb 2025 14:54:45 GMT, Mikhail Ablakatov <mablakatov at openjdk.org> wrote:
>> Add an SVE specialization of the reduce_mul intrinsic for vectors that are 256 bits or longer. It multiplies halves of the source vector using SVE instructions until the intermediate result narrows to a 128-bit vector that fits into a SIMD&FP register. From that point on, the existing ASIMD implementation is used.
>>
>> Nothing changes for vectors that are 128 bits or shorter: those still use the existing ASIMD implementation directly.
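>>
>> For illustration, a minimal scalar model of the folding (a sketch only, not the actual HotSpot code; the class name and lane layout are assumptions):
>>
>>     public class MulReduceModel {
>>         public static void main(String[] args) {
>>             // Model a 512-bit vector of longs as 8 lanes; fold the upper
>>             // half into the lower half until 128 bits (2 lanes) remain.
>>             long[] lanes = {1, 2, 3, 4, 5, 6, 7, 8};
>>             int n = lanes.length;
>>             while (n > 2) {                        // still wider than 128 bits
>>                 for (int i = 0; i < n / 2; i++) {
>>                     lanes[i] *= lanes[i + n / 2];  // SVE half-by-half multiply
>>                 }
>>                 n /= 2;
>>             }
>>             System.out.println(lanes[0] * lanes[1]); // ASIMD tail; prints 40320
>>         }
>>     }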
>>
>> The benchmarks below are from [panama-vector/vectorIntrinsics:test/micro/org/openjdk/bench/jdk/incubator/vector/operation](https://github.com/openjdk/panama-vector/tree/vectorIntrinsics/test/micro/org/openjdk/bench/jdk/incubator/vector/operation). To the best of my knowledge, openjdk/jdk is missing VectorAPI reduction micro-benchmarks.
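>>
>> For reference, a MULLanes-style micro-benchmark could look roughly like this (a hedged sketch; the class name and data are assumptions, not the panama-vector source):
>>
>>     import jdk.incubator.vector.*;
>>     import org.openjdk.jmh.annotations.*;
>>
>>     @State(Scope.Thread)
>>     public class LongMulLanesBench {
>>         static final VectorSpecies<Long> SPECIES = LongVector.SPECIES_MAX;
>>
>>         @Param("1024")
>>         int size;
>>         long[] a;
>>
>>         @Setup
>>         public void setup() {
>>             a = new long[size];
>>             for (int i = 0; i < size; i++) {
>>                 a[i] = i % 3 + 1;  // small values; wrap-around on overflow is fine here
>>             }
>>         }
>>
>>         @Benchmark
>>         public long mulLanes() {
>>             long r = 1;
>>             for (int i = 0; i < SPECIES.loopBound(size); i += SPECIES.length()) {
>>                 r *= LongVector.fromArray(SPECIES, a, i).reduceLanes(VectorOperators.MUL);
>>             }
>>             return r;
>>         }
>>     }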
>>
>> Benchmark results:
>>
>> Neoverse-V1 (SVE 256-bit):
>>
>> Benchmark                 (size)   Mode     master         PR   Units
>> ByteMaxVector.MULLanes      1024  thrpt   5447.643  11455.535  ops/ms
>> ShortMaxVector.MULLanes     1024  thrpt   3388.183   7144.301  ops/ms
>> IntMaxVector.MULLanes       1024  thrpt   3010.974   4911.485  ops/ms
>> LongMaxVector.MULLanes      1024  thrpt   1539.137   2562.835  ops/ms
>> FloatMaxVector.MULLanes     1024  thrpt   1355.551   4158.128  ops/ms
>> DoubleMaxVector.MULLanes    1024  thrpt   1715.854   3284.189  ops/ms
>>
>>
>> Fujitsu A64FX (SVE 512-bit):
>>
>> Benchmark                 (size)   Mode    master        PR   Units
>> ByteMaxVector.MULLanes      1024  thrpt  1091.692  2887.798  ops/ms
>> ShortMaxVector.MULLanes     1024  thrpt   597.008  1863.338  ops/ms
>> IntMaxVector.MULLanes       1024  thrpt   510.642  1348.651  ops/ms
>> LongMaxVector.MULLanes      1024  thrpt   468.878   878.620  ops/ms
>> FloatMaxVector.MULLanes     1024  thrpt   376.284  2237.564  ops/ms
>> DoubleMaxVector.MULLanes    1024  thrpt   431.343  1646.792  ops/ms
>
> Mikhail Ablakatov has updated the pull request incrementally with two additional commits since the last revision:
>
> - fixup: don't modify the value in vsrc
>
> Fix reduce_mul_integral_gt128b() so it doesn't modify vsrc. With this
> change, the result of recursive folding is held in vtmp1. To pass this
> intermediate result to reduce_mul_integral_le128b(), we would normally
> need another temporary FloatRegister, as vtmp1 would essentially act as
> vsrc. However, it's possible to get around this:
> reduce_mul_integral_le128b() is modified so that it accepts matching
> vsrc and vtmp2 arguments. By doing this, we save a temporary register
> in rules that match reduce_mul_integral_gt128b() (a sketch of this
> aliasing trick follows the commit list).
> - cleanup: revert an unnecessary change to reduce_mul_fp_le128b() formatting
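To make that register-saving trick concrete, here is a minimal scalar model (hypothetical names in plain Java, with 64-bit lanes modeled as array elements; this is not the actual macro-assembler code):

    public class AliasModel {
        // Model of reduce_mul_integral_le128b: reduce a 128-bit vector (2 lanes).
        // src is fully read before tmp is written, so src and tmp may be the
        // same array -- the "matching vsrc and vtmp2" case described above.
        static long le128bModel(long[] src, long[] tmp) {
            long product = src[0] * src[1];
            tmp[0] = product;  // scratch write happens only after src is read
            return product;
        }

        // Model of reduce_mul_integral_gt128b on a 256-bit input: fold the
        // upper half into vtmp1, then hand vtmp1 to the <=128-bit step as
        // both source and scratch, so no third temporary is needed.
        static long gt128bModel(long[] src, long[] vtmp1) {
            vtmp1[0] = src[0] * src[2];
            vtmp1[1] = src[1] * src[3];
            return le128bModel(vtmp1, vtmp1);  // aliasing is safe by design
        }

        public static void main(String[] args) {
            long[] src = {2, 3, 5, 7};                         // a 256-bit vector of longs
            System.out.println(gt128bModel(src, new long[2])); // prints 210
        }
    }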
src/hotspot/cpu/aarch64/aarch64_vector.ad line 3012:
> 3010: vReg tmp1, vReg tmp2) %{
> 3011: predicate(Matcher::vector_length_in_bytes(n->in(2)) == 8 ||
> 3012: Matcher::vector_length_in_bytes(n->in(2)) == 16);
Suggestion:
predicate(Matcher::vector_length_in_bytes(n->in(2)) <= 16);
src/hotspot/cpu/aarch64/c2_MacroAssembler_aarch64.cpp line 2113:
> 2111: while (vector_length_in_bytes > FloatRegister::neon_vl) {
> 2112: do_recursive_folding_iteration(vtmp1, vtmp1, vtmp2);
> 2113: }
Looks a little complex. Could we simplify it with the following change? BTW, the `sve_movprfx` inside the loop can be saved. Please correct me if I've misunderstood anything!
Suggestion:
sve_movprfx(vtmp2, vsrc);
while (vector_length_in_bytes > FloatRegister::neon_vl) {
  unsigned vector_length = vector_length_in_bytes / type2aelembytes(bt);
  sve_gen_mask_imm(pgtmp, bt, vector_length / 2);
  // Shuffle the upper half elements of the register to the right.
  sve_ext(vtmp1, vtmp2, vector_length_in_bytes / 2);
  sve_mul(vtmp2, elemType_to_regVariant(bt), pgtmp, vtmp1);
  vector_length_in_bytes = vector_length_in_bytes / 2;
}
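With the single `sve_movprfx` hoisted above the loop, `vtmp2` carries the running partial product, so each iteration needs only the `sve_ext`/`sve_mul` pair and one register move per folding step is saved.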
-------------
PR Review Comment: https://git.openjdk.org/jdk/pull/23181#discussion_r1972770786
PR Review Comment: https://git.openjdk.org/jdk/pull/23181#discussion_r1972768005