RFR: 8366333: AArch64: Enhance SVE subword type implementation of vector compress [v4]

Jatin Bhateja jbhateja at openjdk.org
Tue Sep 30 07:02:50 UTC 2025


On Mon, 29 Sep 2025 08:00:13 GMT, erifan <duke at openjdk.org> wrote:

>> The AArch64 SVE and SVE2 architectures lack an instruction suitable for subword-type `compress` operations. Therefore, the current implementation uses the 32-bit SVE `compact` instruction to compress subword types by first widening the high and low parts to 32 bits, compressing them, and then narrowing them back to their original type. Finally, the high and low parts are merged using the `index + tbl` instructions.
>> 
>> This approach is significantly slower compared to architectures with native support. After evaluating all available AArch64 SVE instructions and experimenting with various implementations—such as looping over the active elements, extraction, and insertion—I confirmed that the existing algorithm is optimal given the instruction set. However, there is still room for optimization in the following two aspects:
>> 1. Merging with `index + tbl` is suboptimal due to the high latency of the `index` instruction.
>> 2. For partial subword types, operations to the highest half are unnecessary because those bits are invalid.
>> 
>> This pull request introduces the following changes:
>> 1. Replaces `index + tbl` with the `whilelt + splice` instructions, which offer lower latency and higher throughput.
>> 2. Eliminates unnecessary compress operations for partial subword type cases.
>> 3. For `sve_compress_byte`, one less temporary register is used to alleviate potential register pressure.
>> 
>> Benchmark results demonstrate that these changes significantly improve performance.
>> 
>> Benchmarks on an Nvidia Grace machine with 128-bit SVE:
>> 
>> Benchmark                 Unit     Before    Error   After     Error   Uplift
>> Byte128Vector.compress    ops/ms   4846.97   26.23   6638.56   31.60   1.36
>> Byte64Vector.compress     ops/ms   2447.69   12.95   7167.68   34.49   2.92
>> Short128Vector.compress   ops/ms   7174.88   40.94   8398.45    9.48   1.17
>> Short64Vector.compress    ops/ms   3618.72    3.04   8618.22   10.91   2.38
>> 
>> 
>> This PR was tested on 128-bit, 256-bit, and 512-bit SVE environments, and all tests passed.
>
> erifan has updated the pull request incrementally with one additional commit since the last revision:
> 
>   Improve coding style a bit

test/hotspot/jtreg/compiler/vectorapi/VectorCompressTest.java line 170:

> 168:     @Test
> 169:     @IR(counts = { IRNode.COMPRESS_VB, "= 1" },
> 170:         applyIfCPUFeature = { "sve", "true" })

Hi @erifan,
Nice work!
Can you please also enable these tests for x86? The following are the relevant CPU features; see the sketch after the list.

CompressVB    -> avx512_vbmi2, avx512vl
CompressVS    -> avx512_vbmi2, avx512vl
CompressVI/VF -> avx512f, avx512vl
CompressVL/VD -> avx512f, avx512vl
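
For illustration, a rough sketch of how the annotations on such a test could look, assuming the IR framework's applyIfCPUFeatureAnd attribute and the feature names above (the existing method body stays unchanged):

@Test
@IR(counts = { IRNode.COMPRESS_VB, "= 1" },
    applyIfCPUFeature = { "sve", "true" })
// sketch only: the x86 rule assumes applyIfCPUFeatureAnd and the feature names listed above
@IR(counts = { IRNode.COMPRESS_VB, "= 1" },
    applyIfCPUFeatureAnd = { "avx512_vbmi2", "true", "avx512vl", "true" })

The other subword and word cases would follow the same pattern with their respective feature pairs.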

PS: avx512_vbmi2 is missing from test/IREncodingPrinter.java

FYI, we currently don't support sub-word compression intrinsics on AVX2/E-core targets. I created a vectorized algorithm using only the Vector API, with no x86 backend changes, and it showed a 12x improvement:

https://github.com/jatin-bhateja/external_staging/blob/main/VectorizedAlgos/SubwordCompress/short_vector_compress.java


PROMPT>java -cp .  --add-modules=jdk.incubator.vector short_vector_compress 0
WARNING: Using incubator modules: jdk.incubator.vector
[ baseline time] 976 ms  [res] 429507073
PROMPT>java -cp .  --add-modules=jdk.incubator.vector short_vector_compress 1
WARNING: Using incubator modules: jdk.incubator.vector
[ withopt time] 80 ms  [res] 429507073
PROMPT>
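
For reference, here is a speculative sketch of what a Vector-API-only sub-word compress can look like. It is illustrative only, not the code behind the link above; the class name, table layout, and species choice are assumptions. It compresses the active short lanes with a precomputed shuffle table indexed by the mask bits, so no sub-word COMPRESS support is needed in the backend.

import jdk.incubator.vector.ShortVector;
import jdk.incubator.vector.VectorMask;
import jdk.incubator.vector.VectorShuffle;
import jdk.incubator.vector.VectorSpecies;

public class ShortCompressSketch {
    // Speculative sketch, not the linked benchmark code.
    static final VectorSpecies<Short> S = ShortVector.SPECIES_128;   // 8 short lanes

    // TABLE[maskBits * 8 + i] = source lane of the i-th compressed lane
    static final int[] TABLE = buildTable();

    static int[] buildTable() {
        int[] t = new int[256 * 8];
        for (int m = 0; m < 256; m++) {
            int k = 0;
            for (int lane = 0; lane < 8; lane++) {
                if ((m & (1 << lane)) != 0) {
                    t[m * 8 + k++] = lane;
                }
            }
            // remaining slots stay 0; those lanes are never consumed
        }
        return t;
    }

    // Writes the compressed lanes at dst[dstOff..] and returns how many are valid.
    // dst needs at least 8 elements of slack, since a full vector is stored.
    static int compressInto(short[] dst, int dstOff, ShortVector v, VectorMask<Short> m) {
        int bits = (int) m.toLong();
        VectorShuffle<Short> sh = VectorShuffle.fromArray(S, TABLE, bits * 8);
        v.rearrange(sh).intoArray(dst, dstOff);
        return m.trueCount();
    }
}

A loop over the input would call compressInto once per vector and advance the output offset by the returned count.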

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/27188#discussion_r2390066873

