RFR: 8307513: C2: intrinsify Math.max(long,long) and Math.min(long,long) [v4]
Galder Zamarreño
galder at openjdk.org
Tue Dec 17 16:42:48 UTC 2024
On Fri, 29 Nov 2024 11:26:10 GMT, Emanuel Peter <epeter at openjdk.org> wrote:
>> Galder Zamarreño has updated the pull request with a new target base due to a merge or a rebase. The incremental webrev excludes the unrelated changes brought in by the merge/rebase. The pull request contains 30 additional commits since the last revision:
>>
>> - Use same default size as in other vector reduction benchmarks
>> - Renamed benchmark class
>> - Double/Float tests only when avx enabled
>> - Make state class non-final
>> - Restore previous benchmark iterations and default param size
>> - Add clipping range benchmark that uses min/max
>> - Encapsulate benchmark state within an inner class
>> - Avoid creating result array in benchmark method
>> - Merge branch 'master' into topic.intrinsify-max-min-long
>> - Revert "Implement cmovL as a jump+mov branch"
>>
>> This reverts commit 1522e26bf66c47b780ebd0d0d0c4f78a4c564e44.
>> - ... and 20 more: https://git.openjdk.org/jdk/compare/b290c6e3...0a8718e1
>
> test/hotspot/jtreg/compiler/intrinsics/math/TestMinMaxInlining.java line 80:
>
>> 78: @IR(phase = { CompilePhase.BEFORE_MACRO_EXPANSION }, counts = { IRNode.MIN_L, "1" })
>> 79: @IR(phase = { CompilePhase.AFTER_MACRO_EXPANSION }, counts = { IRNode.MIN_L, "0" })
>> 80: private static long testLongMin(long a, long b) {
>
> Can you add a comment why it disappears after macro expansion?
Good question. On non-avx512 machines the min/max nodes become cmov nodes after macro expansion, but that's not the full story: on avx512 machines they become minV/maxV nodes instead. Would you tweak the `@IR` annotations to capture this, or would you leave it just as a comment?
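
For illustration only, one possible shape of CPU-feature-specific rules, assuming the IR framework's `applyIfCPUFeature` attribute and assuming the feature string `"avx512"` and a node constant like `IRNode.CMOVE_L` are available (only `IRNode.MIN_L` and the two phases are taken from the test above); this is a sketch, not what the PR currently does:

```java
@Test
// Before macro expansion the intrinsic node is still present.
@IR(phase = { CompilePhase.BEFORE_MACRO_EXPANSION },
    counts = { IRNode.MIN_L, "1" })
// Without avx512, macro expansion turns MinL into a conditional move
// (node name assumed here as IRNode.CMOVE_L).
@IR(phase = { CompilePhase.AFTER_MACRO_EXPANSION },
    applyIfCPUFeature = { "avx512", "false" },
    counts = { IRNode.MIN_L, "0", IRNode.CMOVE_L, "1" })
// With avx512 the MinL node is also gone, but it is replaced by a
// different (min-style) node rather than a cmov.
@IR(phase = { CompilePhase.AFTER_MACRO_EXPANSION },
    applyIfCPUFeature = { "avx512", "true" },
    counts = { IRNode.MIN_L, "0" })
private static long testLongMin(long a, long b) {
    return Math.min(a, b);
}
```

The alternative would be to keep the two existing rules and simply add a comment explaining that the node is macro-expanded into either a cmov or a min/max vector form depending on the CPU.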
-------------
PR Review Comment: https://git.openjdk.org/jdk/pull/20098#discussion_r1888858177