RFR: 8286941: Add mask IR for partial vector operations for ARM SVE

Vladimir Kozlov kvn at openjdk.java.net
Wed Jun 8 02:29:43 UTC 2022


On Mon, 6 Jun 2022 09:42:02 GMT, Xiaohong Gong <xgong at openjdk.org> wrote:

> The VectorAPI SVE backend supports vector operations whose vector length is smaller than the max vector length the current hardware can support. We call them partial vector operations. For some partial operations, like vector load/store and the reductions, we need to generate a mask based on the real vector length and use it to control the operations so that the results are correct.
> 
> For example, if the user defines an IntVector with a 256-bit species and runs it on SVE hardware whose max vector size is 512 bits, all the 256-bit int vector operations are partial. For some ops, a mask is generated in which all lanes above the real vector length are set to 0.
> 
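> For illustration only, Vector API code that ends up with such partial operations might look roughly like the sketch below (the class, method and array names are purely illustrative; compile with `--add-modules jdk.incubator.vector`):
> 
>   import jdk.incubator.vector.IntVector;
>   import jdk.incubator.vector.VectorOperators;
>   import jdk.incubator.vector.VectorSpecies;
> 
>   class PartialReduction {
>       static final VectorSpecies<Integer> SPECIES = IntVector.SPECIES_256;
> 
>       // Assumes a.length is a multiple of SPECIES.length().
>       static int sumLanes(int[] a) {
>           int sum = 0;
>           for (int i = 0; i < a.length; i += SPECIES.length()) {
>               IntVector v = IntVector.fromArray(SPECIES, a, i); // partial vector load on 512-bit SVE
>               sum += v.reduceLanes(VectorOperators.ADD);        // partial add reduction
>           }
>           return sum;
>       }
>   }
> 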
> Currently the mask is generated in the backend, together with the code generation for each op in the match rule. This produces many duplicate instructions for operations that share the same vector type. Besides, the mask generation is loop invariant and could be hoisted outside of the loop.
> 
> Here is an example for vector load and add reduction inside a loop:
> 
>   ptrue   p0.s, vl8             ; mask generation
>   ld1w    {z16.s}, p0/z, [x14]  ; load vector
> 
>   ptrue   p0.s, vl8             ; mask generation
>   uaddv   d17, p0, z16.s        ; add reduction
>   smov    x14, v17.s[0]
> 
> As we can see, the mask generation code "`ptrue`" is duplicated. To improve this, the patch generates the mask IR and adds it to the partial vector ops before code generation, so the duplicate mask generation instructions can be optimized out by GVN and hoisted outside of the loop.
> 
> Note that for masked vector operations there is no need to generate an additional mask, even though the vector length is smaller than the max vector register size, because the higher bits of the original input mask have already been cleared.
> 
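> For example, a masked load written roughly as below (reusing the illustrative SPECIES and array from the sketch above) already has its inactive lanes cleared by the input mask, so no extra mask is added:
> 
>   var m = SPECIES.indexInRange(i, a.length);            // mask covering only the in-range lanes
>   IntVector v = IntVector.fromArray(SPECIES, a, i, m);  // masked (predicated) vector load
> 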
> Here is the performance gain for the 256-bit vector reduction benchmarks on a 512-bit SVE system:
> 
>   Benchmark                  size   Gain
>   Byte256Vector.ADDLanes     1024   0.999
>   Byte256Vector.ANDLanes     1024   1.065
>   Byte256Vector.MAXLanes     1024   1.064
>   Byte256Vector.MINLanes     1024   1.062
>   Byte256Vector.ORLanes      1024   1.072
>   Byte256Vector.XORLanes     1024   1.041
>   Short256Vector.ADDLanes    1024   1.017
>   Short256Vector.ANDLanes    1024   1.044
>   Short256Vector.MAXLanes    1024   1.049
>   Short256Vector.MINLanes    1024   1.049
>   Short256Vector.ORLanes     1024   1.089
>   Short256Vector.XORLanes    1024   1.047
>   Int256Vector.ADDLanes      1024   1.045
>   Int256Vector.ANDLanes      1024   1.078
>   Int256Vector.MAXLanes      1024   1.123
>   Int256Vector.MINLanes      1024   1.129
>   Int256Vector.ORLanes       1024   1.078
>   Int256Vector.XORLanes      1024   1.072
>   Long256Vector.ADDLanes     1024   1.059
>   Long256Vector.ANDLanes     1024   1.101
>   Long256Vector.MAXLanes     1024   1.079
>   Long256Vector.MINLanes     1024   1.099
>   Long256Vector.ORLanes      1024   1.098
>   Long256Vector.XORLanes     1024   1.110
>   Float256Vector.ADDLanes    1024   1.033
>   Float256Vector.MAXLanes    1024   1.156
>   Float256Vector.MINLanes    1024   1.151
>   Double256Vector.ADDLanes   1024   1.062
>   Double256Vector.MAXLanes   1024   1.145
>   Double256Vector.MINLanes   1024   1.140
> 
> This patch also adds 32-bit variants of the SVE whileXX instructions, together with one more matching rule for `VectorMaskGen (ConvI2L src)`. After this patch we save one `sxtw` instruction in most VectorMaskGen cases, as below:
> 
>   sxtw    x14, w14
>   whilelo p0.s, xzr, x14  =>  whilelo p0.s, wzr, w14

src/hotspot/share/opto/matcher.cpp line 2255:

> 2253:     case Op_FmaVF:
> 2254:     case Op_MacroLogicV:
> 2255:     case Op_LoadVectorMasked:

Why is it removed?

src/hotspot/share/opto/vectornode.cpp line 868:

> 866:   default:
> 867:     node->add_req(mask);
> 868:     node->add_flag(Node::Flag_is_predicated_vector);

Add an assert that only VectorMaskOpNode and ReductionNode are expected here.

src/hotspot/share/opto/vectornode.cpp line 951:

> 949: 
> 950: Node* LoadVectorNode::Ideal(PhaseGVN* phase, bool can_reshape) {
> 951:   const TypeVect* vt = as_LoadVector()->vect_type();

Why do you need `as_LoadVector()` for `this`? Same in `StoreVectorNode::Ideal()`.

src/hotspot/share/opto/vectornode.cpp line 988:

> 986:     }
> 987:   }
> 988:   return LoadNode::Ideal(phase, can_reshape);

Should this call `LoadVectorNode::Ideal`?
I understand you did this optimization because `vector_needs_partial_operations` is false for `LoadVectorMaskedNode` in the aarch64 case. But what if it is different on some other (not current) platform?

src/hotspot/share/opto/vectornode.cpp line 1008:

> 1006:     }
> 1007:   }
> 1008:   return StoreNode::Ideal(phase, can_reshape);

Should this call `StoreVectorNode::Ideal`?

src/hotspot/share/opto/vectornode.cpp line 1821:

> 1819:   // Transform (MaskAll m1 (VectorMaskGen len)) ==> (VectorMaskGen len)
> 1820:   // if the vector length in bytes is lower than the MaxVectorSize.
> 1821:   if (is_con_M1(in(1)) && length_in_bytes() < MaxVectorSize) {

Due to #8877, such a length check may not be correct here.
And I don't see an `in(2)->Opcode() == Op_VectorMaskGen` check.

-------------

PR: https://git.openjdk.java.net/jdk/pull/9037

