RFR: 8205051: Poor Performance with UseNUMA when cpu and memory nodes are misaligned [v5]
Derek White
drwhite at openjdk.org
Mon Jan 6 12:48:48 UTC 2025
On Mon, 16 Dec 2024 11:45:14 GMT, Swati Sharma <duke at openjdk.org> wrote:
>> Hi All,
>>
>> This PR addresses performance issues related to the UseNUMA flag: we disable UseNUMA when the process is invoked with misaligned CPU and memory node bindings.
>> We compare the cpunodebind and membind (or interleave, for the interleave policy) bitmasks for equality and disable UseNUMA when they differ.
>> For example on a 4 NUMA node system:
>> 0123 Node Number
>> 1100 cpunodebind bitmask
>> 1111 membind bitmask
>> Disable UseNUMA, as the CPU and memory bitmasks are not equal.
>>
>> 0123 Node Number
>> 1100 cpunodebind bitmask
>> 1100 membind bitmask
>> Enable UseNUMA, as the CPU and memory bitmasks are equal.
>>
>> This covers all cases under all policies; it was tested with the command below:
>> numactl --cpunodebind=0,1 --localalloc java -Xlog:gc*=info -XX:+UseParallelGC -XX:+UseNUMA -version
>>
>> For the localalloc and preferred policies, the membind bitmask reports all nodes as set; hence, if cpunodebind is not bound to all nodes, UseNUMA will be disabled.
>>
>> This PR disables the UseNUMA flag for all GCs. We observed improvements of ~25% on G1GC, ~20% on ZGC, and ~7-8% on PGC, in both throughput and latency, on SPECjbb2015 on a 2-NUMA-node SRF-SP system with a 6-group configuration.
>>
>> Please review and provide your valuable comments.
>>
>> Thanks,
>> Swati Sharma
>> Intel
>
> Swati Sharma has updated the pull request incrementally with one additional commit since the last revision:
>
> 8205051: Minor comment change
Hi David, sorry, I missed that.
I think the fix is to simply change the "log_warning" calls to "log_info". We'll get a patch out under [JDK-8346834](https://bugs.openjdk.org/browse/JDK-8346834) after testing.
Thanks for everyone's comments on this.
[Ironically, this whole adventure started because of a mismatch between the documentation and the implementation of the "localalloc" option in numactl: specifically, whether "localalloc" meant "always allocate on the current node", as documented (until 2020), or merely a best effort. In [2020](https://github.com/numactl/numactl/pull/88) they updated the docs to say localalloc means "best effort". So it's especially embarrassing that I didn't check how UseNUMA was documented in the code!]
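The node-alignment check described in the quoted PR boils down to a bitmask-equality test. Here is an illustrative sketch (in Python for brevity; the actual check lives in HotSpot's platform C++ code and operates on libnuma bitmasks, and the function name below is hypothetical):

```python
# Hedged sketch, not the actual HotSpot implementation: decide whether to
# keep UseNUMA enabled based on the cpunodebind/membind comparison the PR
# describes. Node bitmasks are modeled as sets of node ids.

def keep_use_numa(cpunodebind, membind):
    """Keep UseNUMA only when the CPU and memory node bitmasks match."""
    return frozenset(cpunodebind) == frozenset(membind)

# Examples from the PR description, on a 4-node system:
print(keep_use_numa({0, 1}, {0, 1, 2, 3}))  # mismatch -> disable UseNUMA
print(keep_use_numa({0, 1}, {0, 1}))        # match    -> keep UseNUMA
```

Under the localalloc and preferred policies the membind mask reports all nodes as set, so this test disables UseNUMA whenever cpunodebind covers only a subset of nodes, matching the behavior described above.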
-------------
PR Comment: https://git.openjdk.org/jdk/pull/22395#issuecomment-2573040822
More information about the hotspot-runtime-dev mailing list