RFR: 8372150: Parallel: Tighten requirements around heap sizes with NUMA and Large Pages [v5]
Joel Sikström
jsikstro at openjdk.org
Thu Nov 27 09:42:04 UTC 2025
On Wed, 26 Nov 2025 11:02:10 GMT, Joel Sikström <jsikstro at openjdk.org> wrote:
>> Hello,
>>
>> Today, Parallel decides to opt out of using Large pages if the heap size, whether minimum, initial, or maximum, does not cover enough Large pages for all spaces. Additionally, if we don't get enough heap size for at least one OS page per MutableNUMASpace (one per NUMA node), Parallel decides to run in a NUMA-degraded mode, where it skips allocating memory locally for some NUMA nodes. Both of these issues are problematic if we want to start the JVM with a default initial heap size that is equal to the minimum heap size (see [JDK-8371986](https://bugs.openjdk.org/browse/JDK-8371986)). To solve this, we should consider making sure that the minimum heap size is always enough to cover precisely one page per space, where the page size may be Large or not.
>>
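>> As a rough illustration of that requirement (a hypothetical sketch with invented names and an illustrative space layout, not the actual HotSpot code), the smallest workable heap must cover one page per space:
>>
>> ```cpp
>> #include <cstddef>
>>
>> // Sketch only: Parallel's heap has eden, two survivor spaces, and the old
>> // generation; with UseNUMA, eden is split into one space per NUMA node.
>> static size_t min_heap_bytes_needed(size_t page_size, int numa_nodes) {
>>   const size_t fixed_spaces = 3; // two survivors + old gen (illustrative)
>>   const size_t eden_spaces = numa_nodes > 0 ? (size_t)numa_nodes : 1;
>>   return (fixed_spaces + eden_spaces) * page_size;
>> }
>> ```
>>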
>> For completeness, when user-provided settings for UseNUMA, UseLargePages and heap sizes can't all be satisfied at the same time, one must be prioritised over the others. Today, we prioritise heap size settings over both UseNUMA and UseLargePages. This change suggests shifting the (primary) priority to UseNUMA and UseLargePages, by bumping MinHeapSize, InitialHeapSize and MaxHeapSize to adequate values, if they are not already large enough. By bumping the minimum heap size, we also raise the lower limit for the initial and maximum heap sizes, which must be equal to or greater than the minimum heap size.
>>
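>> A minimal sketch of that bumping, under the same illustrative assumptions (invented names, not the actual patch):
>>
>> ```cpp
>> #include <algorithm>
>> #include <cstddef>
>>
>> // Sketch only: raise the heap size flags so they cover the page
>> // requirement, prioritising UseNUMA/UseLargePages over heap size settings.
>> static void bump_heap_sizes(size_t& min_heap, size_t& initial_heap,
>>                             size_t& max_heap, size_t required) {
>>   min_heap = std::max(min_heap, required);
>>   // Initial and maximum must remain >= the (possibly bumped) minimum.
>>   initial_heap = std::max(initial_heap, min_heap);
>>   max_heap = std::max(max_heap, min_heap);
>> }
>> ```
>>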
>> However, a problem with this approach is that if the Large page size is very large (e.g., 512MB or 1GB), the minimum, initial, and possibly the maximum heap size will be bumped to a very large value as well. To mitigate this, we instead look at which Large page size can be used based on the maximum heap size. In a default configuration, the maximum heap size will almost always be large enough to cover enough Large pages, so we only bump the minimum and initial heap sizes as far as that page size requires. If even the maximum heap size is not enough, we opt out of using Large pages, which is consistent with the old behavior.
>>
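>> The mitigation could then look roughly like this (again only a sketch; the real page size probing lives in platform-specific code):
>>
>> ```cpp
>> #include <cstddef>
>> #include <vector>
>>
>> // Helper from the sketch above.
>> size_t min_heap_bytes_needed(size_t page_size, int numa_nodes);
>>
>> // Sketch only: pick the largest available Large page size whose page
>> // requirement fits within the maximum heap size; if none fits, fall back
>> // to small pages, matching the old opt-out behavior.
>> static size_t select_page_size(const std::vector<size_t>& large_page_sizes, // descending
>>                                size_t small_page_size,
>>                                size_t max_heap, int numa_nodes) {
>>   for (size_t ps : large_page_sizes) {
>>     if (min_heap_bytes_needed(ps, numa_nodes) <= max_heap) {
>>       return ps; // usable without bumping the minimum beyond max_heap
>>     }
>>   }
>>   return small_page_size; // opt out of Large pages entirely
>> }
>> ```
>>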
>> Testing:
>> * Oracle's tier1-8
>> * tier1-4 with the flags `-XX:+UseParallelGC -XX:+UseLargePages -XX:+UseNUMA`
>
> Joel Sikström has updated the pull request incrementally with one additional commit since the last revision:
>
> Re-order methods for consistency in class hierarchy
Thank you for the reviews, everyone! I've re-run the testing listed in the PR description and done some local testing with different large page sizes, and it looks good.
A concern for the future is that tests _might_ behave strangely or perhaps fail when run with Parallel, UseLargePages and large page sizes >2MB. We'll keep this in mind moving forward, either adjusting the tests to the new behavior or excluding them when running with large pages.
The failing test in GHA is due to an unrelated failure (see [JDK-8372585](https://bugs.openjdk.org/browse/JDK-8372585)).
-------------
PR Comment: https://git.openjdk.org/jdk/pull/28394#issuecomment-3584949256