[ping] Re: [11] RFR(M): 8189922: UseNUMA memory interleaving vs membind
Gustavo Romero
gromero at linux.vnet.ibm.com
Thu Jul 19 21:20:56 UTC 2018
Hi Thomas,
On 07/19/2018 09:36 AM, Thomas Schatzl wrote:
> Hi,
>
> On Wed, 2018-07-18 at 19:46 -0300, Gustavo Romero wrote:
>> Hi Community,
>>
>> In light of the additional information brought by Derek and Swati
>> on this issue (which I summarized below), would it be possible to get
>> a second assessment of its priority? Would it be sound to qualify it
>> as P3 instead of P4? I don't have experience doing such an
>> assessment, so I'm asking.
>>
>> - If the JVM isn't allowed to use memory on all of the NUMA nodes,
>> for instance due to numactl, cgroups, or a Docker container, then a
>> significant fraction of the JVM heap will be unusable, causing early
>> GCs;
>>
>> - The waste can be up to (N-1)/N of the young generation in some
>> cases, where N is the total number of nodes available on the system
>> (unpinned); so on an EPYC machine with 8 NUMA nodes, for instance,
>> the waste can be up to 7/8 of the total memory available on a given
>> node;
>>
>> - With the patch for this issue applied, SPECjbb2015 on an EPYC
>> machine (8 NUMA nodes) shows a significant performance improvement:
>>
>> Case 1: Performance for 1 NUMA node bound, MultiJVM 1 Group Run
>> (numactl --cpunodebind=0 --membind=0)
>> Max-jOPS : +27.59%
>> Critical-jOPS : +260% (since the baseline without the patch uses
>> only 1/8 of the total available memory, heap size impacts
>> Critical-jOPS)
>>
>> Case 2: Performance for 2 NUMA nodes bound, Composite Run (numactl
>> --cpunodebind=0,7 --membind=0,7)
>> Max-jOPS : +10.35%
>> Critical-jOPS : +9.89%
>>
>> - It affects AARCH64, PPC64, and Intel/AMD.
>>
>
> I believe the priority could be increased given these numbers. I will
> ask other triaging people what they think too.
>
> Not expecting any issues, I will run it through our test infra (that
> does not use non-default node binding) too.
Thank you so much, Thomas.
> I looked at the webrev, some comments:
>
> - the v2 (http://cr.openjdk.java.net/~gromero/8189922/v2/) webrev does
> not apply cleanly any more.
It's rebased:
http://cr.openjdk.java.net/~gromero/8189922/v3/
I also added the following to the commit message, if that helps:
8189922: UseNUMA memory interleaving vs membind
Reviewed-by: gromero, drwhite, dholmes, tschatzl
Contributed-by: Swati Sharma <swatibits14 at gmail.com>
> - but it seems good except for some odd naming (imo): the getters
> returning a bool do not have the "is" prefix followed by an underscore.
> Is that somehow intentional or related to the libnuma naming? (the
> "isnode_..." / "isbound_..." prefixes which I would call "is_node_..."
> and "is_bound_..") I saw that the numa methods introduced in (I think)
> your other patch do the same, so I am not sure. In any case it is no
> real issue, and could be cleaned up later too.
I think I took it from libnuma's 'numa_bitmask_isbitset', though there
'is' does not occur at the beginning. Sure, I can send a separate change
to clean it up once that change lands. Thanks for reviewing it!
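
For context, here is a minimal standalone sketch of my own (not the
patch itself) showing how libnuma's numa_get_membind() and
numa_bitmask_isbitset() can be used to detect that memory allocation
has been restricted to a subset of nodes, e.g. by numactl --membind=0.
HotSpot loads the libnuma symbols dynamically rather than calling them
directly like this, so treat it only as an illustration of the libnuma
calls involved:

  #include <numa.h>   // libnuma; link with -lnuma
  #include <stdio.h>

  int main() {
    if (numa_available() == -1) {
      printf("NUMA not available\n");
      return 1;
    }
    // Mask of nodes this task is currently allowed to allocate memory
    // from (reflects numactl --membind, cgroup restrictions, etc.).
    struct bitmask *membind = numa_get_membind();
    int bound = 0;
    for (int node = 0; node <= numa_max_node(); node++) {
      if (numa_bitmask_isbitset(membind, node)) {
        bound++;
      }
    }
    // If fewer nodes are bound than configured, striping the heap over
    // all configured nodes would leave the unbound nodes' lgroups unused.
    printf("bound to %d of %d configured nodes\n",
           bound, numa_num_configured_nodes());
    numa_bitmask_free(membind);
    return 0;
  }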
> I can sponsor to push the change into whatever JDK it ends up.
OK. I'll stay tuned to see how the new triage goes. Thanks a lot for
sponsoring it as well.
Best regards,
Gustavo
> Thanks,
> Thomas
>