RFR(XL): 8220310: Implementation: NUMA-Aware Memory Allocation for G1, Mutator (1/3)
Kim Barrett
kim.barrett at oracle.com
Mon Oct 14 21:03:58 UTC 2019
> On Oct 14, 2019, at 11:29 AM, Stefan Johansson <stefan.johansson at oracle.com> wrote:
>
> Hi Sangheon (and Kim),
>
> On 2019-10-11 19:34, sangheon.kim at oracle.com wrote:
>>> ------------------------------------------------------------------------------
>>> src/hotspot/share/gc/g1/g1RegionToSpaceMapper.cpp
>>> 82 _storage.request_memory_on_node(page, _pages_per_region, node_index);
>>> ...
>>> 153 _storage.request_memory_on_node(idx, 1, node_index);
>>>
>>> I'm not sure request_memory_on_node belongs on the _storage object.
>>> The current implementation just has the storage object (conditionally)
>>> forward the request to the memory node manager object. These places in
>>> the space mapper could just make the calls on the memory node manager
>>> object directly (it is already being used nearby). And these places
>>> don't need the conditionalization.
>>>
>>> I think making the space mapper directly call the memory node manager
>>> here would remove the need for the proposed changes to the virtual
>>> space class.
>> Fixed to call G1NUMA::request_memory_on_node() directly (previously G1MemoryNodeManager).
>> But G1NUMA can't calculate the raw address, so I had to add the base address to G1NUMA to get it.
>> When I implemented it, I had a similar opinion (not a good fit for _storage), but I also wanted to avoid adding an extra dependency to G1NUMA. Anyway, I realized we can achieve it easily if we have the base address.
>
> I don't fully agree here. I think having the storage do the call to G1NUMA does make sense, because it knows how to translate a page index to a real address. It also goes along the same lines as the pretouch() call in commit_regions(), but I won't object if we want to leave it in the mapper.
>
> If we do that, there are still some changes required, because we currently call G1NUMA::request_memory_on_node() for all mappers, and all mappers will then use the heap's base address when calling numa_make_local(). So I propose two changes:
> 1. Expose G1PageBasedVirtualSpace::page_start() or use G1CollectedHeap::bottom_addr_for_region(uint index) and let the mapper use it to call request_memory_on_node() with a real address rather than a page index. Another solution could be to change the function even more: call it request_heap_region_on_node(), pass in just the region index, and use G1CollectedHeap::bottom_addr_for_region(uint index) in G1NUMA.
I overlooked part of how my suggestion was handled.
Yeah, I don't think I like having the base address added to G1NUMA. I
like Stefan's change #1 (specifically, adding G1PBVS::page_start()).
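Roughly what I have in mind, as a sketch only (I'm assuming page_start()
is made public on G1PageBasedVirtualSpace, that request_memory_on_node()
is changed to take a real address and a size in bytes, and that the
G1NUMA instance is reachable through some accessor; none of these
signatures are necessarily what's in the webrev):

  // In G1RegionsLargerThanCommitSizeMapper::commit_regions(), for each
  // committed region: translate the region's first page into a real
  // address and hand it to G1NUMA directly, bypassing _storage.
  char* address = _storage.page_start(region_index * _pages_per_region);
  size_t size_in_bytes = _pages_per_region * _storage.page_size(); // assuming a page_size() accessor
  G1NUMA::numa()->request_memory_on_node(address, size_in_bytes, region_index);

That keeps the page-index-to-address translation in the virtual space
layer, where it belongs, and leaves G1NUMA free of any base address.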
I also missed that there seems to be a units mismatch in the call to
request_memory_on_node in G1RegionsSmallerThanCommitSizeMapper: it's
passing a region index rather than a page index (before the above
change) or an address (after it).
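For that one, the fix would look roughly like this sketch (assuming the
mapper's existing regions-per-page bookkeeping, and the same hypothetical
address-based request_memory_on_node() as above):

  // In G1RegionsSmallerThanCommitSizeMapper::commit_regions(): several
  // regions share one commit page here, so first convert the region
  // index into the index of the page containing it.
  size_t page_idx = region_idx / _regions_per_page;
  char* address = _storage.page_start(page_idx);
  G1NUMA::numa()->request_memory_on_node(address, _storage.page_size(), region_idx);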
> 2. Add a state to the mappers saying whether they are NUMA-aware or not; currently only the heap mapper should be NUMA-aware. We could either set this state to true using the mtJavaHeap type, as we have checked before, or add an explicit setter that we only call for the heap mapper.
>
> I know that only doing 2) will fix the current problem, but I think it would be nice to avoid having the base address in G1NUMA. Thoughts?
I don't understand the point about mappers needing to know whether they
are NUMA-aware or not. request_memory_on_node is only called by the two relevant
region->space mappers, with the memory involved always in the Java
heap (after fixing the units mismatch mentioned above). That is,
G1NUMA::request_memory_on_node should only be called for Java heap
memory. (It might be able to assert is_in_reserved or something like
that, though initialization order might prevent that.)
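Something like this sketch, which would only work if G1CollectedHeap is
sufficiently initialized by the time the first request arrives:

  // Possible sanity check at the top of G1NUMA::request_memory_on_node().
  assert(G1CollectedHeap::heap()->is_in_reserved(address),
         "address " PTR_FORMAT " is not in the Java heap", p2i(address));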