Integrated: 8372150: Parallel: Tighten requirements around heap sizes with NUMA and Large Pages

Joel Sikström jsikstro at openjdk.org
Thu Nov 27 09:42:06 UTC 2025


On Wed, 19 Nov 2025 15:44:57 GMT, Joel Sikström <jsikstro at openjdk.org> wrote:

> Hello,
> 
> Today, Parallel opts out of using Large pages if the heap size (minimum, initial, or maximum) does not cover enough Large pages for all spaces. Additionally, if the heap size is not enough for at least one OS page per MutableNUMASpace (one per NUMA-node), Parallel runs in a NUMA-degraded mode, where it skips allocating memory locally for some NUMA-nodes. Both of these behaviors are problematic if we want to start the JVM with a default initial heap size that is equal to the minimum heap size (see [JDK-8371986](https://bugs.openjdk.org/browse/JDK-8371986)). To solve this, we should make sure that the minimum heap size is always enough to cover precisely one page per space, where the page size may be Large or not.
> 
> For completeness: when user-provided settings for UseNUMA, UseLargePages, and heap sizes cannot all be satisfied at the same time, one must be prioritised over the others. Today, we prioritise heap size settings over both UseNUMA and UseLargePages. This change suggests shifting the (primary) priority to UseNUMA and UseLargePages, by bumping MinHeapSize, InitialHeapSize, and MaxHeapSize to an adequate value if they are not already large enough. By bumping the minimum heap size, we also raise the lower limit for the initial and maximum heap sizes, which must be equal to or greater than the minimum heap size.
> 
> However, a problem with this approach is that if the Large page size is very large (e.g., 512MB or 1GB), the minimum, initial, and possibly the maximum heap size will be bumped to a very large value as well. To mitigate this, we instead determine which Large page size can be used based on the maximum heap size. When running the JVM in the default configuration, the maximum heap size will almost always be large enough to cover enough Large pages, so we bump the minimum and initial heap sizes based on that page size instead. If even the maximum heap size is not enough, we opt out of using Large pages, which is consistent with the old behavior.
> 
> Testing:
> * Oracle's tier1-8
> * tier1-4 with the flags `-XX:+UseParallelGC -XX:+UseLargePages -XX:+UseNUMA`
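The sizing policy described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual HotSpot code from the changeset; the function names, the space count, and the page-size list are assumptions made for the example.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr size_t M = 1024 * 1024; // 1 MiB

// Round size up to a multiple of alignment (alignment > 0).
static size_t align_up(size_t size, size_t alignment) {
  return (size + alignment - 1) / alignment * alignment;
}

// Hypothetical: smallest heap that covers one page per space
// (e.g. eden, two survivors, old = 4 spaces).
static size_t min_heap_for_pages(size_t page_size, size_t num_spaces) {
  return page_size * num_spaces;
}

// Hypothetical: bump a requested heap size so the chosen page size can
// back every space with at least one page, keeping the result page-aligned.
static size_t bump_heap_size(size_t requested, size_t page_size,
                             size_t num_spaces) {
  size_t floor = min_heap_for_pages(page_size, num_spaces);
  return align_up(std::max(requested, floor), page_size);
}

// Hypothetical: pick the largest available Large page size that still fits
// one page per space within the maximum heap size; returning 0 means
// opting out of Large pages, matching the old fallback behavior.
static size_t select_large_page_size(size_t max_heap, size_t num_spaces,
                                     const std::vector<size_t>& page_sizes_desc) {
  for (size_t page : page_sizes_desc) {
    if (max_heap >= page * num_spaces) {
      return page;
    }
  }
  return 0;
}
```

For example, with 2MB large pages and four spaces, a requested 4MB minimum heap would be bumped to 8MB, while a maximum heap too small for even the smallest large page would make the selection return 0, falling back to small pages.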

This pull request has now been integrated.

Changeset: 4ac33956
Author:    Joel Sikström <jsikstro at openjdk.org>
URL:       https://git.openjdk.org/jdk/commit/4ac33956343bbfa3619ccb029ceed6c5a402f775
Stats:     201 lines in 11 files changed: 97 ins; 84 del; 20 mod

8372150: Parallel: Tighten requirements around heap sizes with NUMA and Large Pages

Reviewed-by: ayang, stefank, aboldtch, stuefe

-------------

PR: https://git.openjdk.org/jdk/pull/28394
