RFR: 8319795: Static huge pages are not used for CodeCache
Thomas Stuefe
stuefe at openjdk.org
Fri Nov 10 08:07:58 UTC 2023
On Thu, 9 Nov 2023 23:46:29 GMT, Evgeny Astigeevich <eastigeevich at openjdk.org> wrote:
> Hm... I looked into `CodeCache::page_size`, its history and uses. The function smells. It has an interesting hack for the large page case:
>
> ```
> if (os::can_execute_large_page_memory()) {
>   if (InitialCodeCacheSize < ReservedCodeCacheSize) {
>     // Make sure that the page size allows for an incremental commit of the reserved space
>     min_pages = MAX2(min_pages, (size_t)8);
>   }
> ```
>
> The uses are:
>
> * 2 of `page_size(false, 8)`
>
> * 1 of `page_size(false, 1)`
>
> * 1 of `page_size(true, 1)`
>
>
> This looks strange to me. I need to check if everything is correct.
I'm not the original author, but my guess is this tries to ensure that if the large page size is very large (e.g. 1G), smaller page sizes (e.g. 2M or 4K) are used instead. Otherwise, a CodeCacheSize of 1G would allocate a single 1G page, which would be fully committed right from the start and increase the memory footprint.
Seems like an odd optimization, though. It feels a bit arbitrary, since we don't do this for the Java heap, for example. It also prevents users from ever using, e.g., 1G pages for the code cache if they really want that.
@TobiHartmann ?
As for the `if (os::can_execute_large_page_memory()) {`, I believe that was slightly wrong. It seems intended to guard the large-page case, but on Windows `can_execute_large_page_memory` is always true, regardless of UseLargePages. And now, on Linux, it is also always true.
I would just swap that with `if (UseLargePages)`.
-------------
PR Comment: https://git.openjdk.org/jdk/pull/16582#issuecomment-1805266956
More information about the hotspot-runtime-dev
mailing list