RFR: 8256155: 2M large pages for code when LargePageSizeInBytes is set to 1G for heap [v2]

Stefan Johansson sjohanss at openjdk.java.net
Thu Nov 19 08:23:03 UTC 2020


On Wed, 18 Nov 2020 19:22:06 GMT, Marcus G K Williams <github.com+168222+mgkwill at openjdk.org> wrote:

> Hi Stefan,
> 
> Thanks so much for your review.
> 
> > Hi and welcome :)
> > I haven't started reviewing the code in detail but a first quick glance raised a couple of questions/comments:
> > 
> > * Why do we have a special case for `exec` when selecting a large page size?
> 
> To my knowledge, 2M is the smallest large page size supported by Linux at the moment. Hardcoding 2M pages was an attempt to simplify the reservation of code memory using LargePages. Currently, when 1G LargePages are in use, code memory is usually reserved with the system's default page size because it does not require reservations of 1G or larger. On modern Linux variants the default page size is typically 4k on x86_64; on other architectures it can be up to 64k. The purpose of the patch is to enable the use of smaller LargePages for reservations of less than 1G when LargePages are enabled and LargePageSizeInBytes is set to 1G, so that these reservations do not fall back to 4k-64k pages.
> 
> Perhaps I should just select the largest page size <= the bytes requested and remove the 'exec' special case.
> 
Yes, I see no reason to keep that special case, and we want to keep this code as general as possible. Looking at the code in `os::Linux::find_default_large_page_size()`, it looks like S390 supports 1M large pages, so we cannot assume 2M. I suggest using a technique similar to the one in `os::Linux::find_large_page_size` to find the supported page sizes: if you scan `/sys/kernel/mm/hugepages` and populate `_page_sizes` with the information found there, we know we only get supported sizes.
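
To make the idea concrete, here is a rough sketch of that scan (the helper name and fixed-size array are mine, not the actual patch code; it just relies on the documented sysfs layout where each configured size shows up as a directory named `hugepages-<size>kB`):

```c++
#include <dirent.h>
#include <stdio.h>

// Hypothetical helper: collect the large page sizes the kernel actually
// supports by scanning /sys/kernel/mm/hugepages. Sizes are returned in bytes.
static size_t* scan_hugepage_sizes(size_t* count) {
  static size_t sizes[8];                 // Linux exposes only a handful of sizes
  *count = 0;
  DIR* dir = opendir("/sys/kernel/mm/hugepages");
  if (dir == NULL) {
    return sizes;                         // hugetlbfs not configured
  }
  struct dirent* entry;
  while ((entry = readdir(dir)) != NULL && *count < 8) {
    size_t size_kb;
    // Entries look like "hugepages-2048kB" or "hugepages-1048576kB".
    if (sscanf(entry->d_name, "hugepages-%zukB", &size_kb) == 1) {
      sizes[(*count)++] = size_kb * 1024; // convert kB to bytes
    }
  }
  closedir(dir);
  return sizes;
}
```

Populating `_page_sizes` from something like this guarantees every entry is a size the current kernel configuration can actually back.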

> > * If we need the special case, `exec_large_page_size()` should not be hard-coded to return 2M but rather use `os::default_large_page_size()`.
> 
> os::default_large_page_size() will not necessarily be small enough for code memory reservations: if os::default_large_page_size() = 1G, those reservations would get 4k pages on most Linux x86_64 variants. My intent is to ensure the smallest large page size available is used for code memory reservations. Perhaps my 2M hardcoding was a mistake and I should discover this size and select it based on the bytes being reserved.

You are correct that the default size might indeed be 1G, so using the approach I suggest above to figure out the available page sizes and then picking an appropriate one given the size of the mapping sounds good.
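
For the selection step, something along these lines is what I have in mind (again just an illustrative sketch, not the code I expect in the patch): given the discovered sizes, pick the largest one that still fits the requested mapping, and otherwise fall back to the default page size.

```c++
#include <stddef.h>

// Hypothetical helper: choose the largest supported large page size that is
// not bigger than the requested reservation, falling back to the regular
// default page size when none fits.
static size_t select_page_size(const size_t* sizes, size_t count,
                               size_t bytes, size_t default_page_size) {
  size_t best = default_page_size;
  for (size_t i = 0; i < count; i++) {
    if (sizes[i] <= bytes && sizes[i] > best) {
      best = sizes[i];                    // largest page that still fits
    }
  }
  return best;
}
```

With that in place, a code reservation of, say, 48M would get 2M pages (when supported) even with `LargePageSizeInBytes=1G`, instead of dropping down to 4k.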

Please also avoid force-pushing changes to open PRs, since it makes it harder to follow what changes between updates. It is fine for a PR to contain multiple commits, and if you need to update with things from the main branch you should merge rather than rebase.

Cheers,
Stefan

-------------

PR: https://git.openjdk.java.net/jdk/pull/1153


