RFR: 8334431: C2 SuperWord: fix performance regression due to store-to-load-forwarding failures [v2]
Emanuel Peter
epeter at openjdk.org
Mon Nov 18 08:04:35 UTC 2024
> **History**
> This issue became apparent with https://github.com/openjdk/jdk/pull/21521 / [JDK-8325155](https://bugs.openjdk.org/browse/JDK-8325155):
> On machines that do not support SHA intrinsics, the SHA computation runs in Java code. That Java code has a loop that previously did not vectorize, but now does since https://github.com/openjdk/jdk/pull/21521 / [JDK-8325155](https://bugs.openjdk.org/browse/JDK-8325155). It turns out that this kind of loop is actually slower when vectorized, which led to a regression, reported originally as:
> `8334431: Regression 18-20% on Mac x64 on Crypto.signverify`
>
> I then investigated the issue thoroughly, and discovered that it existed even before https://github.com/openjdk/jdk/pull/21521 / [JDK-8325155](https://bugs.openjdk.org/browse/JDK-8325155). I wrote a [blog post](https://eme64.github.io/blog/2024/06/24/Auto-Vectorization-and-Store-to-Load-Forwarding.html) about the issue.
>
> **Summary of Problem**
>
> As described in the [blog post](https://eme64.github.io/blog/2024/06/24/Auto-Vectorization-and-Store-to-Load-Forwarding.html), vectorization can introduce store-to-load-forwarding failures that were not present in the scalar loop code. In the scalar code, the loads and stores are always either exactly overlapping or non-overlapping; in the vectorized code they can be partially overlapping. When a store and a later load partially overlap, the stored value cannot be forwarded directly from the store buffer to the load (which would be fast); it first has to go through the L1 cache. This incurs a higher latency on the dependency edge from the store to the load.
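>
> For illustration, a minimal sketch of the kind of loop shape that is affected (array name, element type, and sizes are illustrative assumptions, not the actual SHA code or the benchmark from this PR):
>
> ```java
> // Each iteration loads the value that was stored `offset` iterations earlier.
> // Scalar code: every load either reads exactly the element a single earlier
> // store wrote, or does not overlap any pending store at all, so the store
> // buffer can forward the value.
> // Vectorized code: a vector load starting at i - offset can partially overlap
> // the vector store of a previous iteration when offset is smaller than (or
> // not a multiple of) the vector length, so forwarding fails and the load has
> // to wait for the store to reach the L1 cache.
> static void kernel(int[] a, int offset) {
>     for (int i = offset; i < a.length; i++) {
>         a[i] = a[i - offset] + 1;
>     }
> }
> ```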
>
> **Benchmark**
>
> I introduced a new micro-benchmark in https://github.com/openjdk/jdk/pull/19880, and now further expanded it in this PR. You can see the extensive results in [this comment below](https://github.com/openjdk/jdk/pull/21521#issuecomment-2458938698).
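>
> For context, a hypothetical JMH-style kernel in the spirit of that benchmark (class name, parameter values, and array size are assumptions, not the actual benchmark file added in the PR):
>
> ```java
> import org.openjdk.jmh.annotations.*;
>
> @State(Scope.Thread)
> public class StoreToLoadForwardingSketch {
>     // Distance between the load and the earlier store it depends on.
>     @Param({"1", "2", "3", "4", "8", "16", "63", "64"})
>     int offset;
>
>     int[] a = new int[16 * 1024];
>
>     @Benchmark
>     public int[] kernel() {
>         // Same dependent load/store pattern as above; C2 may vectorize this loop.
>         for (int i = offset; i < a.length; i++) {
>             a[i] = a[i - offset] + 1;
>         }
>         return a;
>     }
> }
> ```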
>
> The benchmarks look different on different machines, but they all have a pattern similar to this:
>
> [benchmark plot not preserved in the mailing-list archive]
>
> We see that the `scalar` loop is faster for low `offset` values, and the `vectorized` loop is faster for high offsets (and for power-of-2 offsets).
>
> The reason is that for low offsets, th...
Emanuel Peter has updated the pull request with a new target base due to a merge or a rebase. The pull request now contains 25 commits:
- manual merge
- Merge branch 'master' into JDK-8334431-V-store-to-load-forwarding
- Merge branch 'master' into JDK-8334431-V-store-to-load-forwarding
- fix whitespace
- fix tests and build
- fix store-to-load forward IR rules
- updates before the weekend ... who knows if they are any good
- refactor to iteration threshold
- use jvmArgs again, and apply same fix as 8343345
- revert to jvmArgsPrepend
- ... and 15 more: https://git.openjdk.org/jdk/compare/543e355b...000f9f13
-------------
Changes: https://git.openjdk.org/jdk/pull/21521/files
Webrev: https://webrevs.openjdk.org/?repo=jdk&pr=21521&range=01
Stats: 4386 lines in 17 files changed: 4324 ins; 4 del; 58 mod
Patch: https://git.openjdk.org/jdk/pull/21521.diff
Fetch: git fetch https://git.openjdk.org/jdk.git pull/21521/head:pull/21521
PR: https://git.openjdk.org/jdk/pull/21521