RFR: 8282966: AArch64: Optimize VectorMask.toLong with SVE2
Xiaohong Gong
xgong at openjdk.java.net
Sun Apr 24 02:27:40 UTC 2022
On Thu, 21 Apr 2022 12:17:57 GMT, Eric Liu <eliu at openjdk.org> wrote:
> This patch optimizes the backend implementation of VectorMaskToLong for
> AArch64 with a more efficient approach to moving mask bits from a
> predicate register to a general-purpose register, similar to what x86
> PMOVMSKB[1] does, by using BEXT[2], which is available in SVE2.
>
> With this patch, the generated code (for a byte-type input mask with
> SPECIES_512, generated on a QEMU emulator with a 512-bit SVE vector
> register size) changes as below:
>
> Before:
>
> mov z16.b, p0/z, #1
> fmov x0, d16
> orr x0, x0, x0, lsr #7
> orr x0, x0, x0, lsr #14
> orr x0, x0, x0, lsr #28
> and x0, x0, #0xff
> fmov x8, v16.d[1]
> orr x8, x8, x8, lsr #7
> orr x8, x8, x8, lsr #14
> orr x8, x8, x8, lsr #28
> and x8, x8, #0xff
> orr x0, x0, x8, lsl #8
>
> orr x8, xzr, #0x2
> whilele p1.d, xzr, x8
> lastb x8, p1, z16.d
> orr x8, x8, x8, lsr #7
> orr x8, x8, x8, lsr #14
> orr x8, x8, x8, lsr #28
> and x8, x8, #0xff
> orr x0, x0, x8, lsl #16
>
> orr x8, xzr, #0x3
> whilele p1.d, xzr, x8
> lastb x8, p1, z16.d
> orr x8, x8, x8, lsr #7
> orr x8, x8, x8, lsr #14
> orr x8, x8, x8, lsr #28
> and x8, x8, #0xff
> orr x0, x0, x8, lsl #24
>
> orr x8, xzr, #0x4
> whilele p1.d, xzr, x8
> lastb x8, p1, z16.d
> orr x8, x8, x8, lsr #7
> orr x8, x8, x8, lsr #14
> orr x8, x8, x8, lsr #28
> and x8, x8, #0xff
> orr x0, x0, x8, lsl #32
>
> mov x8, #0x5
> whilele p1.d, xzr, x8
> lastb x8, p1, z16.d
> orr x8, x8, x8, lsr #7
> orr x8, x8, x8, lsr #14
> orr x8, x8, x8, lsr #28
> and x8, x8, #0xff
> orr x0, x0, x8, lsl #40
>
> orr x8, xzr, #0x6
> whilele p1.d, xzr, x8
> lastb x8, p1, z16.d
> orr x8, x8, x8, lsr #7
> orr x8, x8, x8, lsr #14
> orr x8, x8, x8, lsr #28
> and x8, x8, #0xff
> orr x0, x0, x8, lsl #48
>
> orr x8, xzr, #0x7
> whilele p1.d, xzr, x8
> lastb x8, p1, z16.d
> orr x8, x8, x8, lsr #7
> orr x8, x8, x8, lsr #14
> orr x8, x8, x8, lsr #28
> and x8, x8, #0xff
> orr x0, x0, x8, lsl #56
>
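For reference, the per-64-bit-chunk reduction above (the repeated `orr`/`lsr`/`and` steps) can be modeled in scalar Java. This is only a sketch to illustrate the bit math, with a made-up method name, not HotSpot code:

```java
public class OrrLsrModel {
    // Each byte of x holds 0x00 or 0x01 (one mask lane per byte).
    // Folding with shifts of 7, 14 and 28 moves bit 0 of byte k to
    // result bit k, mirroring the orr/lsr/and steps in the sequence above.
    static long gather8(long x) {
        x |= x >>> 7;
        x |= x >>> 14;
        x |= x >>> 28;
        return x & 0xff;
    }

    public static void main(String[] args) {
        // Lanes 0, 2, 5 and 6 are set in this example input.
        System.out.println(Long.toBinaryString(gather8(0x0001010000010001L)));
    }
}
```

Note that this reduction has to be repeated once per 64-bit chunk, which is why the "Before" sequence is so long for a 512-bit vector.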
> After:
>
> mov z16.b, p0/z, #1
> mov z17.b, #1
> bext z16.d, z16.d, z17.d
> mov z17.d, #0
> uzp1 z16.s, z16.s, z17.s
> uzp1 z16.h, z16.h, z17.h
> uzp1 z16.b, z16.b, z17.b
> mov x0, v16.d[0]
>
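The three `uzp1` steps concatenate even-indexed elements with a zero vector at successively narrower element sizes, which has the net effect of packing the low byte of each 64-bit lane into consecutive bytes of the result. A scalar sketch of that packing (hypothetical method name, not HotSpot code):

```java
public class Uzp1Model {
    // After BEXT, each 64-bit lane holds an up-to-8-bit partial mask in
    // its low byte. The uzp1 .s/.h/.b chain with a zero second operand
    // keeps only those low bytes and packs them contiguously, which for
    // up to 8 lanes is equivalent to:
    static long packLowBytes(long[] lanes) {
        long result = 0;
        for (int i = 0; i < lanes.length && i < 8; i++) {
            result |= (lanes[i] & 0xffL) << (8 * i);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Long.toHexString(packLowBytes(new long[] {0xabL, 0xcdL})));
    }
}
```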
> [1] https://www.felixcloutier.com/x86/pmovmskb
> [2] https://developer.arm.com/documentation/ddi0602/2020-12/SVE-Instructions/BEXT--Gather-lower-bits-from-positions-selected-by-bitmask-
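For readers unfamiliar with BEXT, its per-element behavior can be modeled in scalar Java: within each 64-bit element, the bits of the source at positions where the mask has a 1 are gathered, in order, into the low bits of the result. With the mask holding 0x01 in every byte (the `mov z17.b, #1` above), this extracts bit 0 of each byte into 8 contiguous bits. A sketch under those assumptions (hypothetical method name, not the HotSpot implementation):

```java
public class BextModel {
    // Scalar model of SVE2 BEXT on one 64-bit element: bits of src at
    // positions selected by mask are packed into the low bits of the result.
    static long bext64(long src, long mask) {
        long result = 0;
        int out = 0;
        for (int i = 0; i < 64; i++) {
            if (((mask >>> i) & 1L) != 0) {
                result |= ((src >>> i) & 1L) << out;
                out++;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // mask 0x01 per byte selects bit 0 of each byte, as in the
        // "After" sequence above; all eight lanes set here.
        System.out.println(bext64(0x0101010101010101L, 0x0101010101010101L));
    }
}
```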
src/hotspot/cpu/aarch64/assembler_aarch64.hpp line 3758:
> 3756: assert(T != Q, "invalid size"); \
> 3757: f(0b01000101, 31, 24), f(T, 23, 22), f(0b0, 21); \
> 3758: rf(Zm, 16), f(0b1011, 15, 12); f(opc, 11, 10); \
To align with the surrounding code style, could you please use `,` instead of `;` after `f(0b1011, 15, 12)`? Both work, but `,` is consistent with the other macros.
src/hotspot/cpu/aarch64/c2_MacroAssembler_aarch64.cpp line 961:
> 959: // Pack the lowest-numbered bit of each mask element in src into a long value
> 960: // in dst, at most the first 64 lane elements.
> 961: // pgtmp would not be used if UseSVE=2 and the hardware supports FEAT_BITPERM.
`UseSVE == 2` instead of `UseSVE=2` ?
src/hotspot/cpu/aarch64/c2_MacroAssembler_aarch64.cpp line 962:
> 960: // in dst, at most the first 64 lane elements.
> 961: // pgtmp would not be used if UseSVE=2 and the hardware supports FEAT_BITPERM.
> 962: // Clobbers: rscratch1 if hardware not supports FEAT_BITPERM.
`Clobbers: rscratch1 if hardware does not support FEAT_BITPERM` ?
-------------
PR: https://git.openjdk.java.net/jdk/pull/8337