Withdrawn: 8343689: AArch64: Optimize MulReduction implementation
duke
duke at openjdk.org
Thu Jan 15 19:48:37 UTC 2026
On Fri, 17 Jan 2025 19:35:44 GMT, Mikhail Ablakatov <mablakatov at openjdk.org> wrote:
> Add an SVE specialization of the reduce_mul intrinsic for vectors 256 bits or longer. It repeatedly multiplies the two halves of the source vector using SVE instructions until the intermediate result fits into a single 128-bit SIMD&FP register; from that point on, the existing ASIMD implementation takes over.
>
> Nothing changes for vectors of 128 bits or shorter: for those, the existing ASIMD implementation is still used directly.
>
> The benchmarks below are from [panama-vector/vectorIntrinsics:test/micro/org/openjdk/bench/jdk/incubator/vector/operation](https://github.com/openjdk/panama-vector/tree/vectorIntrinsics/test/micro/org/openjdk/bench/jdk/incubator/vector/operation). To the best of my knowledge, openjdk/jdk is missing VectorAPI reduction micro-benchmarks.
>
> Benchmarks results:
>
> Neoverse-V1 (SVE 256-bit)
>
> Benchmark                (size)  Mode   master    PR         Units
> ByteMaxVector.MULLanes   1024    thrpt  5447.643  11455.535  ops/ms
> ShortMaxVector.MULLanes  1024    thrpt  3388.183   7144.301  ops/ms
> IntMaxVector.MULLanes    1024    thrpt  3010.974   4911.485  ops/ms
> LongMaxVector.MULLanes   1024    thrpt  1539.137   2562.835  ops/ms
> FloatMaxVector.MULLanes  1024    thrpt  1355.551   4158.128  ops/ms
> DoubleMaxVector.MULLanes 1024    thrpt  1715.854   3284.189  ops/ms
>
>
> Fujitsu A64FX (SVE 512-bit):
>
> Benchmark                (size)  Mode   master    PR        Units
> ByteMaxVector.MULLanes   1024    thrpt  1091.692  2887.798  ops/ms
> ShortMaxVector.MULLanes  1024    thrpt   597.008  1863.338  ops/ms
> IntMaxVector.MULLanes    1024    thrpt   510.642  1348.651  ops/ms
> LongMaxVector.MULLanes   1024    thrpt   468.878   878.620  ops/ms
> FloatMaxVector.MULLanes  1024    thrpt   376.284  2237.564  ops/ms
> DoubleMaxVector.MULLanes 1024    thrpt   431.343  1646.792  ops/ms
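The halving strategy described in the quoted PR text can be illustrated with a small plain-Java sketch (this is only a scalar model of the idea, not the actual HotSpot intrinsic; the power-of-two length assumption and the 2-lane cutoff standing in for the 128-bit ASIMD boundary are simplifications):

```java
// Scalar sketch of the halving multiply-reduction: repeatedly multiply the
// lower half of the lanes by the upper half until the working set is small,
// then finish with a sequential product (the stand-in for the ASIMD tail).
public class MulReductionSketch {
    static long reduceMul(long[] src) {
        long[] v = src.clone();           // don't clobber the caller's data
        int len = v.length;               // assumed to be a power of two
        while (len > 2) {                 // 2 lanes ~ the 128-bit boundary
            int half = len / 2;
            for (int i = 0; i < half; i++) {
                v[i] *= v[i + half];      // lower half *= upper half
            }
            len = half;
        }
        long result = 1;
        for (int i = 0; i < len; i++) {   // final sequential reduction
            result *= v[i];
        }
        return result;
    }

    public static void main(String[] args) {
        long[] src = {1, 2, 3, 4, 5, 6, 7, 8};
        System.out.println(reduceMul(src)); // 40320 = 8!
    }
}
```

Each halving pass does len/2 independent multiplies, so a wide SVE register can process them in one instruction, which is where the speedups in the tables above come from.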
This pull request has been closed without being integrated.
-------------
PR: https://git.openjdk.org/jdk/pull/23181
More information about the hotspot-compiler-dev mailing list