RFR: JDK-8256155: os::Linux Populate all large_page_sizes, select smallest page size in reserve_memory_special_huge_tlbfs* [v15]

Stefan Johansson sjohanss at openjdk.java.net
Mon Jan 18 10:45:48 UTC 2021


On Sat, 16 Jan 2021 05:58:56 GMT, Thomas Stuefe <stuefe at openjdk.org> wrote:

>> Did some more testing with the code. I'm using Parallel for testing because G1 does a better job aligning sizes and avoiding some problems.
>> 
>> I found that this change has a problem with mappings that use both small and large pages (`reserve_memory_special_huge_tlbfs_mixed()`). I'm currently investigating whether we can remove this type of mixed mapping and instead make sure we only use large pages when properly aligned, so that in the future we might be able to get rid of some code in this area. For now, see my comments below.
>
> Since we are not shipping this with JDK16, I'm more relaxed now. This will have time to cook before JDK17 is shipped, which takes care of my third point (doing more tests).
> 
> About the jtreg test. I originally wrote:
> 
>>> one jtreg test to test that the VM comes up with -XX:+UseLargePages -XX:LargePageSizeInBytes=1G and allocates small-large-pages as expected. This is not only needed as a function proof but to prevent regressions when we reform the code (which will happen)
> 
> Not sure if that was too vague. An easy way would be to add some tracing to the VM in the allocation path, e.g. with `log_info(os)(...)`, then in the test start a VM with `-XX:+UseLargePages -XX:LargePageSizeInBytes=1G -Xlog:os` and scan its output. Many tests do this; for an easy example see runtime/os/TestUseCpuAllocPath.java.
> 
> I'll take a closer look next week but will wait until Stefan had his go.
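
For illustration, tracing along those lines might look like the following (a hypothetical sketch, not an actual patch; `log_info(os)` and `SIZE_FORMAT` are real HotSpot constructs, but the helper and message are made up here and only compile inside the HotSpot source tree):

```
#include "logging/log.hpp"
#include "runtime/os.hpp"

// Emit one scannable line per large-page reservation so a test running
// with -Xlog:os can verify which page size was actually used.
static void trace_large_page_reservation(size_t bytes, size_t page_size) {
  log_info(os)("Reserved " SIZE_FORMAT " bytes using " SIZE_FORMAT
               " byte large pages", bytes, page_size);
}
```

A jtreg test could then launch the VM with `-XX:+UseLargePages -XX:LargePageSizeInBytes=1G -Xlog:os` through `jdk.test.lib.process.ProcessTools` and assert that the expected page size shows up in the output, in the style of runtime/os/TestUseCpuAllocPath.java.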

Found a couple of additional issues:
* The `page_size_for_region_*()` helpers were previously only used in higher-level code, to help figure out whether large pages should or could be used for a given size. Using them at the actual reservation site breaks the cases where a higher level has requested at least a certain number of pages for the given size. Take the heap with Parallel as an example (see the sketch after this bullet):
  const size_t min_pages = 4; // 1 for eden + 1 for each survivor + 1 for old
  const size_t page_sz = os::page_size_for_region_aligned(MinHeapSize, min_pages);
  If both 2M and 1G pages are enabled, this settles on 2M in the Parallel GC setup code, but the reservation then ends up allocating just one 1G page if we run with `-Xmx1g`.
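
To make the mismatch concrete, here is a self-contained model of the aligned selection (simplified and standalone; the real helper lives in HotSpot's os code and consults the actually enabled page sizes):

```
#include <cstdio>
#include <cstddef>

// Largest page size that divides the region evenly into at least
// min_pages pages; page_sizes must be sorted descending.
static size_t page_size_for_region(size_t region_size, size_t min_pages,
                                   const size_t* page_sizes, size_t n) {
  for (size_t i = 0; i < n; i++) {
    size_t ps = page_sizes[i];
    if (region_size % ps == 0 && region_size / ps >= min_pages) {
      return ps;
    }
  }
  return page_sizes[n - 1];  // smallest page size as a last resort
}

int main() {
  const size_t K = 1024, M = K * K, G = M * K;
  const size_t sizes[] = { G, 2 * M, 4 * K };  // enabled page sizes

  // Parallel setup asks for at least 4 pages for a 1G min heap -> 2M.
  printf("setup picks:   %zu\n", page_size_for_region(G, 4, sizes, 3));
  // A later reservation of the same 1G only needs 1 page -> one 1G page,
  // contradicting the decision made during GC setup.
  printf("reserve picks: %zu\n", page_size_for_region(G, 1, sizes, 3));
  return 0;
}
```

Compiled standalone, this prints 2097152 (2M) for the setup query but 1073741824 (1G) for the reservation query.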

* There is also an issue when there are, for example, too few free 1G pages to allocate the heap: we then fall straight back to 4k pages instead of trying 2M pages first.

My preferred way of handling this would be for the higher-level code to set an upper bound on the page size, and for the mapping layer to satisfy the mapping using the largest possible page size with enough free pages, as sketched below. Such a change might be a bit big for this PR, but we need to make sure this change doesn't break anything like what I describe above.
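
A rough sketch of that policy (assumed semantics, with a hypothetical `free_pages_of_size()` helper; none of this is existing HotSpot code):

```
#include <cstdio>
#include <cstddef>

// Hypothetical free-page query; in HotSpot this could read
// /sys/kernel/mm/hugepages/hugepages-*/free_hugepages. Here we pretend
// there are no free 1G pages but plenty of 2M pages.
static size_t free_pages_of_size(size_t page_size) {
  const size_t G = 1024u * 1024u * 1024u;
  return page_size == G ? 0 : 512;
}

// The caller only supplies an upper bound; the mapping layer walks the
// enabled page sizes (sorted descending) and takes the largest one that
// respects the bound, divides the mapping evenly, and has enough free
// pages, so a failed 1G attempt degrades to 2M before 4k.
static size_t select_page_size(size_t bytes, size_t bound,
                               const size_t* page_sizes, size_t n) {
  for (size_t i = 0; i < n; i++) {
    size_t ps = page_sizes[i];
    if (ps > bound || bytes % ps != 0) continue;
    if (free_pages_of_size(ps) >= bytes / ps) {
      return ps;
    }
  }
  return page_sizes[n - 1];  // small pages always work
}

int main() {
  const size_t K = 1024, M = K * K, G = M * K;
  const size_t sizes[] = { G, 2 * M, 4 * K };
  // 1G heap with a 1G bound, but no free 1G pages -> 2M, not 4k.
  printf("selected page size: %zu\n", select_page_size(G, G, sizes, 3));
  return 0;
}
```

With that shape, the 1G to 2M to 4k fallback comes for free from the descending walk, and callers like the Parallel setup code no longer have to guess the final page size up front.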

-------------

PR: https://git.openjdk.java.net/jdk/pull/1153


