RFR: 8372150: Parallel: Tighten requirements around heap sizes with NUMA and Large Pages [v5]
Albert Mingkun Yang
ayang at openjdk.org
Wed Nov 26 12:45:58 UTC 2025
On Wed, 26 Nov 2025 11:02:10 GMT, Joel Sikström <jsikstro at openjdk.org> wrote:
>> Hello,
>>
>> Today, Parallel opts out of using Large Pages if the heap size (minimum, initial, or maximum) does not cover enough Large Pages for all spaces. Additionally, if the heap is not large enough to provide at least one OS page per MutableNUMASpace (one per NUMA node), Parallel runs in a NUMA-degraded mode, where it skips allocating memory locally for some NUMA nodes. Both of these issues are problematic if we want to start the JVM with a default initial heap size that is equal to the minimum heap size (see [JDK-8371986](https://bugs.openjdk.org/browse/JDK-8371986)). To solve this, we should ensure that the minimum heap size is always enough to cover exactly one page per space, where the page size may be Large or not.
>>
>> For completeness, when user-provided settings for UseNUMA, UseLargePages, and heap sizes cannot all be satisfied at the same time, one must be prioritised over the others. Today, we prioritise heap size settings over both UseNUMA and UseLargePages. This change suggests shifting the (primary) priority to UseNUMA and UseLargePages by bumping MinHeapSize, InitialHeapSize, and MaxHeapSize to an adequate value if they are not already large enough. By bumping the minimum heap size, we also raise the lower limit for the initial and maximum heap sizes, which must be equal to or greater than the minimum heap size.
>>
>> However, a problem with this approach is that if the Large page size is very large (e.g., 512MB or 1GB), the minimum, initial, and possibly the maximum heap size would be bumped to a very large value as well. To mitigate this, we instead determine which Large page size can be used based on the maximum heap size. Running the JVM in its default configuration, the maximum heap size is almost always large enough to cover enough Large pages, so we bump only the minimum and initial heap sizes to the required value. But if even the maximum heap size is not enough, we opt out of using Large pages, which is consistent with the old behavior.
>>
>> Testing:
>> * Oracle's tier1-4
>> * tier1-3 with the flags `-XX:+UseParallelGC -XX:+UseLargePages -XX:+UseNUMA`
>
> Joel Sikström has updated the pull request incrementally with one additional commit since the last revision:
>
> Re-order methods for consistency in class hierarchy
Marked as reviewed by ayang (Reviewer).
-------------
PR Review: https://git.openjdk.org/jdk/pull/28394#pullrequestreview-3510786811
More information about the hotspot-dev
mailing list