RFR: 8372150: Parallel: Tighten requirements around heap sizes with NUMA and Large Pages [v4]

Thomas Stuefe stuefe at openjdk.org
Wed Nov 26 10:20:04 UTC 2025


On Wed, 26 Nov 2025 09:37:48 GMT, Joel Sikström <jsikstro at openjdk.org> wrote:

>> Hello,
>> 
>> Today, Parallel opts out of using Large pages if the heap size, either minimum, initial or maximum, does not cover enough Large pages for all spaces. Additionally, if the heap is not large enough for at least one OS page per MutableNUMASpace (one per NUMA node), Parallel runs in a NUMA-degraded mode, where it skips allocating memory locally for some NUMA nodes. Both of these issues are problematic if we want to start the JVM with a default initial heap size that is equal to the minimum heap size (see [JDK-8371986](https://bugs.openjdk.org/browse/JDK-8371986)). To solve this, we should make sure that the minimum heap size is always enough to cover precisely one page per space, where the page size may be Large or not.
>> 
>> For completeness, when user-provided settings for UseNUMA, UseLargePages and heap sizes cannot all be satisfied at the same time, one must be prioritised over the others. Today, we prioritise heap size settings over both UseNUMA and UseLargePages. This change suggests shifting the (primary) priority to UseNUMA and UseLargePages by bumping MinHeapSize, InitialHeapSize and MaxHeapSize to an adequate value if they are not already large enough. By bumping the minimum heap size, we also raise the lower limit for the initial and maximum heap sizes, which must be equal to or greater than the minimum heap size.
>> 
>> However, a problem with this approach is that if the Large page size is very large (e.g., 512MB or 1GB), the minimum, initial, and possibly the maximum heap size would be bumped to a very large value as well. To mitigate this, we instead decide which Large page size can be used based on the maximum heap size. When running the JVM in its default configuration, the maximum heap size will almost always be large enough to cover enough Large pages, so we bump the minimum and initial heap sizes up to the required value instead. But if the maximum heap size is not large enough, we opt out of using Large pages, which is consistent with the old behavior.
>> 
>> Testing:
>> * Oracle's tier1-4
>> * tier1-3 with the flags `-XX:+UseParallelGC -XX:+UseLargePages -XX:+UseNUMA`
>
> Joel Sikström has updated the pull request with a new target base due to a merge or a rebase. The pull request now contains six commits:
> 
>  - Merge branch 'master' into JDK-8372150_parallel_minheapsize_numa_largepages
>  - Choose large page size based on MaxHeapSize
>  - Revert "8372150: Parallel: Tighten requirements around MinHeapSize with NUMA and Large Pages"
>    
>    This reverts commit c02e08ade597193d70d1eb21036845bdd0304d51.
>  - Revert "Albert review feedback"
>    
>    This reverts commit 66928d22112c1ac516e4b654c28249fdedf0dba9.
>  - Albert review feedback
>  - 8372150: Parallel: Tighten requirements around MinHeapSize with NUMA and Large Pages

This looks good to me (not that you need another review). A jtreg regression test would be nice, possibly in a separate RFE.

What do other GCs do if the heap is smaller than the smallest large page size?

One could also consider the super-large page size case a user error that should result in a VM exit.
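
For illustration, the sizing policy described in the quoted PR text amounts to roughly the following standalone sketch. The helper names, the one-page-per-space accounting (one young space per NUMA node plus one old-gen space) and the example values are my own assumptions, not the actual patch:

// Illustrative, standalone sketch of the described policy; not the
// actual HotSpot change. Names and the space accounting are assumptions.
#include <cstddef>
#include <cstdio>
#include <algorithm>

struct HeapFlags {
  size_t   min_heap;        // stands in for MinHeapSize
  size_t   initial_heap;    // stands in for InitialHeapSize
  size_t   max_heap;        // stands in for MaxHeapSize
  bool     use_large_pages; // stands in for UseLargePages
  unsigned numa_nodes;      // 1 if UseNUMA is off
};

// Smallest heap that still gives every space at least one page.
// Assumes one young space per NUMA node plus one old-gen space.
static size_t required_min(size_t page, unsigned numa_nodes) {
  return page * (numa_nodes + 1);
}

static void adjust(HeapFlags& f, size_t small_page, size_t large_page) {
  size_t page = small_page;
  if (f.use_large_pages) {
    // Pick the page size based on the maximum heap size: keep Large pages
    // only if the maximum heap covers one Large page per space.
    if (f.max_heap >= required_min(large_page, f.numa_nodes)) {
      page = large_page;
    } else {
      f.use_large_pages = false;  // opt out, consistent with the old behavior
    }
  }
  // Bump the sizes (never shrink them) so every space gets one page.
  size_t needed  = required_min(page, f.numa_nodes);
  f.min_heap     = std::max(f.min_heap, needed);
  f.initial_heap = std::max(f.initial_heap, f.min_heap);
  f.max_heap     = std::max(f.max_heap, f.initial_heap);
}

int main() {
  // Example: 4 KB small pages, 2 MB large pages, 4 NUMA nodes, tiny -Xms.
  HeapFlags f = {4u * 1024, 4u * 1024, 512u * 1024 * 1024, true, 4};
  adjust(f, 4 * 1024, 2 * 1024 * 1024);
  std::printf("min=%zu initial=%zu max=%zu large_pages=%d\n",
              f.min_heap, f.initial_heap, f.max_heap, f.use_large_pages);
  return 0;
}

If I read the change correctly, the key point is that the page size is chosen from the maximum heap size first, and only then are the lower bounds raised, so a huge Large page size degrades to small pages instead of inflating the minimum and initial sizes.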

-------------

Marked as reviewed by stuefe (Reviewer).

PR Review: https://git.openjdk.org/jdk/pull/28394#pullrequestreview-3510213568
