[jdk16] RFR: 8259380: Correct pretouch chunk size to cap with actual page size

Thomas Schatzl tschatzl at openjdk.java.net
Fri Jan 8 15:49:05 UTC 2021


On Fri, 8 Jan 2021 13:41:06 GMT, Patrick Zhang <qpzhang at openjdk.org> wrote:

> This is actually a regression, with regards to JVM startup time extreme slowdown, initially found at an aarch64 platform (Ampere Altra core).
> 
> The chunk size of pretouching should cap with the input page size which probably stands for large pages size if UseLargePages was set, otherwise processing chunks with much smaller size inside large size pages would hurt performance.
> 
> This issue was introduced during a refactor on chunk calculations JDK-8254972 (2c7fc85) but did not cause any problem immediately since the default PreTouchParallelChunkSize for all platforms are 1GB which can cover all popular sizes of large pages in use by most kernel variations. Later on, JDK-8254699 (805d058) set default 4MB for Linux platform, which is helpful to speed up startup time for some platforms. For example, most x64, since the popular default large page size (e.g. CentOS) is 2MB. In contrast, most default large page size with aarch64 platforms/kernels (e.g. CentOS) are 512MB, so using the 4MB chunk size to do page walk through the pages inside 512MB large page hurt performance of startup time.
> 
> In addition, there will be a similar problem if we set -XX:PreTouchParallelChunkSize=4k at a x64 Linux platform, the startup slowdown will show as well.
> 
> Tests:
> https://bugs.openjdk.java.net/secure/attachment/92623/pretouch_chunk_size_fix_testing.txt
> The 4 before-after comparisons show the JVM startup time go back to normal.
> 1). 33.381s to 0.870s
> 2). 20.333s to 2.740s
> 3). 15.090s to 6.268s
> 4). 38.983s to 6.709s
> (Use the start time of pretouching the first Survivor space as a rough measurement, while \time, or GCTraceTime can generate similar results)

Thanks for moving this issue to JDK16.

I looked a bit into what could cause this, and one thing that I particularly noticed is that the tests are enabling THP.

With THP, the (original) code updates the page size to os::vm_page_size():

#ifdef LINUX
  // When using THP we need to always pre-touch using small pages as the OS will
  // initially always use small pages.
  page_size = UseTransparentHugePages ? (size_t)os::vm_page_size() : page_size;
#endif
  size_t chunk_size = MAX2(PretouchTask::chunk_size(), page_size);
After having looked at the code, I am not completely sure whether the analysis of the issue is correct or what the change fixes. To me it looks like the default chunk size on aarch64 should simply be much higher than on x64.

Example:
`page_size` is the size of a page, that is 512M in your case; `os::vm_page_size()` is the small page size, 64k in that configuration.

`chunk_size` is then set to 4M (MAX2(PreTouchParallelChunkSize, 64k)) - because with THP, as the comment indicates, we do not know whether the reservation is backed by large or small pages - the code must use the small page size for the actual pretouch within a chunk.
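To make the interaction of the three sizes concrete, here is a simplified standalone model of that calculation (illustrative only, not the actual HotSpot code; the function names are mine):

```cpp
#include <algorithm>
#include <cstddef>

// With THP the OS may initially back the reservation with small pages,
// so the pretouch walk must use the small page size, not the region's
// (large) page size.
size_t effective_page_size(size_t region_page_size,
                           size_t small_page_size,
                           bool use_thp) {
  return use_thp ? small_page_size : region_page_size;
}

// The chunk handed to each worker thread: the configured
// -XX:PreTouchParallelChunkSize, but never smaller than one page.
size_t chunk_size(size_t pretouch_parallel_chunk_size, size_t page_size) {
  return std::max(pretouch_parallel_chunk_size, page_size);
}

// Per-chunk pretouch walk: touch one byte per page in [start, end).
// Returns the number of pages touched.
size_t pretouch(char* start, char* end, size_t page_size) {
  size_t touched = 0;
  for (char* p = start; p < end; p += page_size) {
    *p = 0;
    ++touched;
  }
  return touched;
}
```

With the aarch64 configuration above (THP on, 512M large pages, 64k small pages, 4M default chunk), this model gives an effective page size of 64k and a chunk size of 4M, i.e. 64 touches per chunk.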

I am also not sure about the statement that this issue was introduced in JDK-8254972: the only differences seem to be where the page size for the `PretouchTask` is initialized (in the `PretouchTask` constructor there), and that the chunk size calculation moved into the `PretouchTask::work` method, done by every thread separately.

The only thing I could see is that in case the OS already gave us large pages (i.e. 512M), iterating over the same page using multiple threads may cause performance issues. Although for the startup case, x64 does not seem to care (for me, for 20g heaps), and the default of 4M seems to be fastest, as shown in https://bugs.openjdk.java.net/browse/JDK-8254699 (and afaik with THP you always get small pages at first).

I can't see how setting the chunk size to 4k shows "the same problem" on x64, given that the problem does not show with the 4M (default) chunk size and 1g (huge) pages. E.g. chunk size = 4M:

$ time java -Xmx20g -Xms20g -XX:+UseLargePages -XX:LargePageSizeInBytes=1g -XX:+UseTransparentHugePages -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:PreTouchParallelChunkSize=4m Hello 
[0.001s][warning][gc] LargePageSizeInBytes=1073741824 large_page_size 1073741824
[0.053s][warning][gc] pretouch 21474836480 chunk 4194304 page 4096
[0.406s][warning][gc] pretouch 335544320 chunk 4194304 page 4096
[0.413s][warning][gc] pretouch 335544320 chunk 4194304 page 4096
[0.421s][warning][gc] pretouch 41943040 chunk 4194304 page 4096
[0.423s][warning][gc] pretouch 41943040 chunk 4194304 page 4096
[0.432s][warning][gc] pretouch 41943040 chunk 4194304 page 4096
Hello World!

real	0m0.708s
user	0m0.367s
sys	0m9.983s

and chunk size = 1g:

$ time java -Xmx20g -Xms20g -XX:+UseLargePages -XX:LargePageSizeInBytes=1g -XX:+UseTransparentHugePages -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:PreTouchParallelChunkSize=1g Hello 
[0.001s][warning][gc] LargePageSizeInBytes=1073741824 large_page_size 1073741824
[0.054s][warning][gc] pretouch 21474836480 chunk 1073741824 page 4096
[1.141s][warning][gc] pretouch 335544320 chunk 1073741824 page 4096
[1.216s][warning][gc] pretouch 335544320 chunk 1073741824 page 4096
[1.289s][warning][gc] pretouch 41943040 chunk 1073741824 page 4096
[1.299s][warning][gc] pretouch 41943040 chunk 1073741824 page 4096
[1.320s][warning][gc] pretouch 41943040 chunk 1073741824 page 4096
Hello World!

real	0m1.613s
user	0m0.420s
sys	0m16.666s

Even without THP, using 4M chunks (while still using 1g pages for the Java heap) seems to be consistently faster.

I would suggest that in this case the correct fix would be to do the same testing as done for JDK-8254699 and add an aarch64-specific default for `-XX:PreTouchParallelChunkSize`.
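For illustration, such a platform-specific default could follow the same flag-definition pattern the Linux change used; the fragment below is purely a sketch of the idea, not the actual gc_globals.hpp entry, and the value would have to come from benchmarking:

```cpp
// Illustrative sketch only -- not the actual flag definition.
// Idea: pick a per-platform default large enough that worker threads
// do not repeatedly walk tiny chunks inside one huge page (512M large
// pages are common on aarch64 kernels).
product(size_t, PreTouchParallelChunkSize,
        AARCH64_ONLY(/* value to be determined by testing */ 64 * M)
        NOT_AARCH64(4 * M),
        "Per-thread chunk size used for parallel memory pre-touch.")
```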

The suggested change (increasing the chunk size based on the page size, particularly with THP enabled) does not seem to fix the actual issue (a suboptimal default chunk size) and also regresses performance on x64, which I would prefer to avoid.

(There is still the issue whether it makes sense to have a smaller chunk size than page size *without* THP, but that is not the issue here afaict)

-------------

PR: https://git.openjdk.java.net/jdk16/pull/97


