RFR: 8341137: Optimize long vector multiplication using x86 VPMULUDQ instruction [v2]
Quan Anh Mai
qamai at openjdk.org
Fri Oct 11 16:57:13 UTC 2024
On Wed, 9 Oct 2024 09:59:11 GMT, Jatin Bhateja <jbhateja at openjdk.org> wrote:
>> This patch optimizes LongVector multiplication by inferring the VPMULUDQ instruction for the following IR patterns.
>>
>>
>> MulL ( And SRC1, 0xFFFFFFFF) ( And SRC2, 0xFFFFFFFF)
>> MulL (URShift SRC1 , 32) (URShift SRC2, 32)
>> MulL (URShift SRC1 , 32) ( And SRC2, 0xFFFFFFFF)
>> MulL ( And SRC1, 0xFFFFFFFF) (URShift SRC2 , 32)
>>
>>
>>
>> A 64x64-bit multiplication produces a 128-bit result, and can be performed by individually multiplying the upper and lower doublewords of the multiplier with the multiplicand and assembling the partial products into the full-width result. Targets supporting vector quadword multiplication have separate instructions to compute the upper and lower quadwords of the 128-bit result. Therefore the existing VectorAPI multiplication operator expects shape conformance between source and result vectors.
>>
>> If the upper 32 bits of the quadword multiplier and multiplicand are always zero, then the result of the multiplication depends only on the partial product of their lower doublewords and can be computed with an unsigned 32-bit multiplication instruction that produces a quadword result. The patch matches this pattern in a target-dependent manner without introducing a new IR node.
>>
>> The VPMULUDQ instruction performs unsigned multiplication between the even-numbered doubleword lanes of two long vectors and produces 64-bit results. It has much lower latency than the full 64-bit multiplication instruction VPMULLQ; in addition, non-AVX512DQ targets do not support direct quadword multiplication at all. We can thus skip the redundant partial products for the zeroed-out upper 32 bits. This results in throughput improvements on both P-core and E-core Xeons.
>>
>> Please find below the performance of the [XXH3 hashing benchmark](https://mail.openjdk.org/pipermail/panama-dev/2024-July/020557.html) included with the patch:-
>>
>>
>> Sierra Forest :-
>> ============
>> Baseline:-
>> Benchmark (SIZE) Mode Cnt Score Error Units
>> VectorXXH3HashingBenchmark.hashingKernel 1024 thrpt 2 806.228 ops/ms
>> VectorXXH3HashingBenchmark.hashingKernel 2048 thrpt 2 403.044 ops/ms
>> VectorXXH3HashingBenchmark.hashingKernel 4096 thrpt 2 200.641 ops/ms
>> VectorXXH3HashingBenchmark.hashingKernel 8192 thrpt 2 100.664 ops/ms
>>
>> With Optimization:-
>> Benchmark (SIZE) Mode Cnt Score Error Units
>> VectorXXH3HashingBenchmark.hashingKernel ...
>
> Jatin Bhateja has updated the pull request with a new target base due to a merge or a rebase. The pull request now contains two commits:
>
> - Merge branch 'master' of http://github.com/openjdk/jdk into JDK-8341137
> - 8341137: Optimize long vector multiplication using x86 VPMULUDQ instruction
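For reference, a minimal scalar Java sketch of the identity the quoted transform relies on (the class and method names here are illustrative, not from the patch): when both 64-bit operands have only their low 32 bits set, the full 64-bit product equals the unsigned 32x32-to-64-bit product of the low doublewords, which is what VPMULUDQ computes per lane.

```java
// Scalar model of the identity behind the VPMULUDQ transform.
// mulLowUnsigned models one 64-bit lane of VPMULUDQ: an unsigned
// 32x32 -> 64-bit multiply of the low doublewords of each operand.
public class MulUDQModel {
    static long mulLowUnsigned(long a, long b) {
        return Integer.toUnsignedLong((int) a) * Integer.toUnsignedLong((int) b);
    }

    public static void main(String[] args) {
        long a = 0xDEADBEEFCAFEBABEL;
        long b = 0x123456789ABCDEF0L;

        // Pattern: MulL (And SRC1, 0xFFFFFFFF) (And SRC2, 0xFFFFFFFF)
        long masked = (a & 0xFFFFFFFFL) * (b & 0xFFFFFFFFL);
        // Pattern: MulL (URShift SRC1, 32) (URShift SRC2, 32)
        long shifted = (a >>> 32) * (b >>> 32);

        // Both reduce to a single unsigned 32x32 -> 64 multiply.
        System.out.println(masked == mulLowUnsigned(a, b));                   // true
        System.out.println(shifted == mulLowUnsigned(a >>> 32, b >>> 32));    // true
    }
}
```

The shift/mask combinations in the other two quoted patterns follow the same reasoning: each operand contributes only 32 significant bits, so the product fits in 64 bits with no upper partial products.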
Another approach is to do something similar to `MacroLogicVNode`: you can create another node and transform `MulVL` into it before matching, which is more flexible than using match rules. I have a similar idea, which is to group those transformations together into a `Phase` called `PhaseLowering`. It could be used, for example, to split `ExtractI` into the 128-bit lane extraction and the element extraction from that lane. This allows us to run `GVN` on those nodes, so `v.lane(5) + v.lane(7)` can be compiled nicely as:
vextracti128 xmm0, ymm1, 1
pextrd eax, xmm0, 1
// the second vextracti128 xmm0, ymm1, 1 here is eliminated by GVN
pextrd ecx, xmm0, 3
add eax, ecx
-------------
PR Comment: https://git.openjdk.org/jdk/pull/21244#issuecomment-2407793168
More information about the hotspot-compiler-dev mailing list