RFR: 8319795: Static huge pages are not used for CodeCache

Evgeny Astigeevich eastigeevich at openjdk.org
Thu Nov 16 14:21:33 UTC 2023


On Fri, 10 Nov 2023 08:05:38 GMT, Thomas Stuefe <stuefe at openjdk.org> wrote:

>> This is a fix of a regression introduced by [JDK-8261894](https://bugs.openjdk.org/browse/JDK-8261894).
>> After JDK-8261894, `os::can_execute_large_page_memory()` returns `true` only if `UseTransparentHugePages` is true. As CodeCache uses `os::can_execute_large_page_memory()` when it detects a page size, CodeCache cannot use static huge pages (`UseTransparentHugePages` is `false`) anymore after the change.
>> Before JDK-8261894, `os::can_execute_large_page_memory()` returned `true` when either `UseTransparentHugePages` or `UseHugeTLBFS` was true.
>> 
>> After JDK-8261894, `-XX:+UseLargePages -XX:-UseTransparentHugePages` means static huge pages are to be used, i.e. `UseHugeTLBFS` is `true`. If `UseLargePages` is not set to `true` via the option, it will be set to `true` if `UseTransparentHugePages` is `true`.
>> 
>> `os::can_execute_large_page_memory()` is modified to return `UseLargePages`. A regression gtest is added.
>> 
>> Tested fastdebug and release builds:
>> - [x] tier1
>> - [x] gtest
>> - [x] test/hotspot/jtreg/gtest/LargePageGtests.java
>
>> Hm... I looked into `CodeCache::page_size`, its history and uses. The function smells. It has an interesting hack for the large page case:
>> 
>> ```
>>   if (os::can_execute_large_page_memory()) {
>>     if (InitialCodeCacheSize < ReservedCodeCacheSize) {
>>       // Make sure that the page size allows for an incremental commit of the reserved space
>>       min_pages = MAX2(min_pages, (size_t)8);
>>     }
>> ```
>> 
>> The uses are:
>> 
>> * 2 of `page_size(false, 8)`
>> * 1 of `page_size(false, 1)`
>> * 1 of `page_size(true, 1)`
>> 
>> This looks strange to me. I need to check whether everything is correct.
> 
> I'm not the original author, but my guess is this tries to ensure that if the large page size is very large, e.g. 1G, smaller page sizes, e.g. 2M or 4K, are used instead; otherwise a CodeCacheSize of 1G would allocate a single 1GB page, which would be fully committed right from the start and increase the memory footprint.
> 
> Seems like an odd optimization though. It feels a bit arbitrary since we don't do this for the java heap, for example. Also, it prevents users from ever using e.g. 1G pages for code cache if they really want that.
> 
> @TobiHartmann ?
> 
> As for the `if (os::can_execute_large_page_memory()) {`, I believe that was slightly wrong. It feels like it wants to guard the large-page case, but on Windows "can_execute_large_page_memory" is always true, regardless of UseLargePages. And now, on Linux, it is also always true.
> 
> I would just swap that with `if (UseLargePages)`.

@tstuefe, @TobiHartmann,
`InitialCodeCacheSize` is useless when static huge pages are used: when static huge pages are reserved, they are also committed up front: https://github.com/openjdk/jdk/blob/master/src/hotspot/share/memory/virtualspace.cpp#L248

```
  if (use_explicit_large_pages(page_size)) {
    // System can't commit large pages i.e. use transparent huge pages and
    // the caller requested large pages. To satisfy this request we use
    // explicit large pages and these have to be committed up front to ensure
    // no reservations are lost.
    do {
```

-------------

PR Comment: https://git.openjdk.org/jdk/pull/16582#issuecomment-1814521305
