RFR: 8326962: C2 SuperWord: cache VPointer
Emanuel Peter
epeter at openjdk.org
Tue Apr 2 15:45:11 UTC 2024
On Tue, 2 Apr 2024 13:59:46 GMT, Christian Hagedorn <chagedorn at openjdk.org> wrote:
>> This is a subtask of [JDK-8315361](https://bugs.openjdk.org/browse/JDK-8315361).
>>
>> Parsing `VPointer` currently happens all over SuperWord, and often in quadratic loops where we compare every load/store with every other.
>>
>> I propose to cache the `VPointer`s, then we can do a constant-time cache lookup rather than parsing the pointer subgraph every time.
>>
>> There are now only a few cases where we cannot use the cached `VPointer`:
>> - `SuperWord::unrolling_analysis`: we have no `VLoopAnalyzer`, and so no submodules like `VLoopPointers`. We don't need to cache, since we only iterate over the loop body once, and create only a single `VPointer` per memop.
>> - `SuperWord::output`: when we have a `Load`, and try to bypass `StoreVector` nodes. The `StoreVector` nodes are new, and so we have no cached `VPointer` for them. This could be fixed somehow, but I don't want to deal with it now. I intend to refactor `SuperWord::output` soon, and can look into options at that point (either I bypass before we insert the vector nodes, or I remember what scalar memop the vector was created from, and then get the cached pointer this way).
>>
>> This changeset is also a preparation step for [JDK-8325155](https://bugs.openjdk.org/browse/JDK-8325155). I will have a list of pointers, and sort them such that creating adjacent refs is much more efficient.
>>
>> **Benchmarking SuperWord Compile Time**
>>
>> I use the same benchmark from https://github.com/openjdk/jdk/pull/18532.
>>
>> On master:
>>
>> C2 Compile Time: 56.816 s
>> IdealLoop: 56.604 s
>> AutoVectorize: 56.192 s
>>
>>
>> With this patch:
>>
>> C2 Compile Time: 49.719 s
>> IdealLoop: 49.509 s
>> AutoVectorize: 49.106 s
>>
>>
>> This saves us about `7 sec`, which is significant. I will have to see what effect it has once we also apply https://github.com/openjdk/jdk/pull/18532, but I think the combined effect will be very significant.
>
> src/hotspot/share/opto/vectorization.cpp line 224:
>
>> 222: for (int i = 0; i < _body.body().length(); i++) {
>> 223: MemNode* mem = _body.body().at(i)->isa_Mem();
>> 224: if (mem != nullptr && _vloop.in_bb(mem)) {
>
> I see that you use this pattern twice. Maybe we could provide a "for_each_mem(lambda)" in `VLoopBody`? But it could also be done separately.
I was considering it. I can do that.
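Roughly what I have in mind (just a sketch, not part of this patch; it assumes `VLoopBody` keeps its node list in `_body` and a reference to the `VLoop` in `_vloop`, and the name and exact signature are still open):

// Sketch: walk the loop body once and hand every memop that is
// inside the loop to the callback, together with its body index.
template<typename Callback>
void VLoopBody::for_each_mem(Callback callback) const {
  for (int i = 0; i < _body.length(); i++) {
    MemNode* mem = _body.at(i)->isa_Mem();
    if (mem != nullptr && _vloop.in_bb(mem)) {
      callback(mem, i);
    }
  }
}

The two call sites would then shrink to something like:

// Hypothetical call site in VLoopPointers: one iteration over the body,
// creating and caching a VPointer per memop (details elided).
_body.for_each_mem([&] (MemNode* mem, int bb_idx) {
  // create the VPointer for mem and store it at bb_idx in the cache
});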
-------------
PR Review Comment: https://git.openjdk.org/jdk/pull/18577#discussion_r1548132492