[jdk16] RFR: 8259380: Correct pretouch chunk size to cap with actual page size

Patrick Zhang qpzhang at openjdk.java.net
Sat Jan 9 04:18:59 UTC 2021


On Fri, 8 Jan 2021 16:08:43 GMT, Thomas Schatzl <tschatzl at openjdk.org> wrote:

>> Thanks for moving this issue to JDK16.
>> 
>> I looked a bit into what could cause this, and one thing that I particularly noticed is that the tests are enabling THP.
>> 
>> With THP, the (original) code sets/updates the page size to os::vm_page_size():
>> 
>> #ifdef LINUX
>>   // When using THP we need to always pre-touch using small pages as the OS will
>>   // initially always use small pages.
>>   page_size = UseTransparentHugePages ? (size_t)os::vm_page_size() : page_size;
>> #endif
>>    size_t chunk_size = MAX2(PretouchTask::chunk_size(), page_size);
>> After having looked at the code, I am not completely sure whether the analysis of the issue is correct or what the change fixes. To me it looks like the default chunk size on aarch64 should be much higher than on x64.
>> 
>> Example:
>> `page_size` is the size of a page, that is 512M in your case; `os::vm_page_size()` is the small page size, 64k in that configuration.
>> 
>> `chunk_size` is then set to 4M (MAX(PreTouchParallelChunkSize, 64k)) - because with THP, as the comment indicates, we do not know whether the reservation is a large or a small page - so the code must use the small page size for actual pretouch within a chunk.
>> 
>> I am also not sure about the statement about the introduction of this issue in JDK-8254972: the only difference seems to be where the page size for the `PretouchTask` is initialized, in the `PretouchTask` constructor there, and the calculation of the chunk size in the `PretouchTask::work` method done by every thread separately.
>> 
>> The only thing I could see is that in case the OS already gave us large pages (i.e. 512M), iterating over the same page using multiple threads may cause performance issues, although for the startup case, x64 does not seem to care (for me, for 20g heaps) and the default of 4M seems to be fastest as shown in JDK-8254699 (https://bugs.openjdk.java.net/browse/JDK-8254699) (and afaik with THP you always get small pages at first).
>> 
>> I can't see how setting the chunk size to 4k shows "the same problem" on x64, as it does not show with the 4M (default) chunk size and 1g (huge) pages. E.g. chunk size = 4M
>> 
>> $ time java -Xmx20g -Xms20g -XX:+UseLargePages -XX:LargePageSizeInBytes=1g -XX:+UseTransparentHugePages -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:PreTouchParallelChunkSize=4m Hello 
>> [0.001s][warning][gc] LargePageSizeInBytes=1073741824 large_page_size 1073741824
>> [0.053s][warning][gc] pretouch 21474836480 chunk 4194304 page 4096
>> [0.406s][warning][gc] pretouch 335544320 chunk 4194304 page 4096
>> [0.413s][warning][gc] pretouch 335544320 chunk 4194304 page 4096
>> [0.421s][warning][gc] pretouch 41943040 chunk 4194304 page 4096
>> [0.423s][warning][gc] pretouch 41943040 chunk 4194304 page 4096
>> [0.432s][warning][gc] pretouch 41943040 chunk 4194304 page 4096
>> Hello World!
>> 
>> real	0m0.708s
>> user	0m0.367s
>> sys	0m9.983s
>> 
>> and chunk size = 1g:
>> 
>> $ time java -Xmx20g -Xms20g -XX:+UseLargePages -XX:LargePageSizeInBytes=1g -XX:+UseTransparentHugePages -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:PreTouchParallelChunkSize=1g Hello 
>> [0.001s][warning][gc] LargePageSizeInBytes=1073741824 large_page_size 1073741824
>> [0.054s][warning][gc] pretouch 21474836480 chunk 1073741824 page 4096
>> [1.141s][warning][gc] pretouch 335544320 chunk 1073741824 page 4096
>> [1.216s][warning][gc] pretouch 335544320 chunk 1073741824 page 4096
>> [1.289s][warning][gc] pretouch 41943040 chunk 1073741824 page 4096
>> [1.299s][warning][gc] pretouch 41943040 chunk 1073741824 page 4096
>> [1.320s][warning][gc] pretouch 41943040 chunk 1073741824 page 4096
>> Hello World!
>> 
>> real	0m1.613s
>> user	0m0.420s
>> sys	0m16.666s
>> 
>> Even without THP, using 4M chunks (while still using 1g pages for the Java heap) seems to be consistently faster.
>> 
>> I would suggest that in this case the correct fix would be to do the same testing as done in JDK-8254699 and add an aarch64-specific default for `-XX:PreTouchParallelChunkSize`.
>> 
>> The suggested change (to increase chunk size based on page size, particularly with THP enabled) seems not to fix the issue (suboptimal default chunk size) and also regresses performance on x64, which I would prefer to avoid.
>> 
>> (There is still the issue of whether it makes sense to have a smaller chunk size than page size *without* THP, but that is not the issue here afaict)
>
> Another option is to just set the default chunk size for aarch64 to e.g. 512M and defer searching for the "best" later.

Thanks for the comments.

First of all, I am not objecting to https://github.com/openjdk/jdk16/commit/805d05812c5e831947197419d163f9c83d55634a, which does help in most cases. If we have an aarch64 system with 2MB large pages configured in the kernel, we can certainly share the benefit as well.

> I am also not sure about the statement about the introduction of this issue in JDK-8254972: the only difference seems to be where the page size for the `PretouchTask` is initialized, in the `PretouchTask` constructor there, and the calculation of the chunk size in the `PretouchTask::work` method done by every thread separately.

Before https://github.com/openjdk/jdk16/commit/2c7fc85be92c60f4262aff3bc80e704792c1e810, the `PretouchTask` instance was initialized first, and the cap with page size was applied afterwards, when calculating `num_chunks`. After that commit, the `chunk_size` calculation happens before the `PretouchTask` instance is initialized. That is the difference.
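
To make the ordering concrete, here is a minimal C++ sketch of the current scheme (post-2c7fc85). It is a simplified model with hypothetical stand-ins for `os::vm_page_size()` and the JVM flags, not the actual HotSpot sources:

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>

    // Simplified model: with THP the page-size adjustment and the chunk
    // cap both happen before the task object would be constructed;
    // previously the cap was applied per worker thread when deriving
    // num_chunks. All constants are illustrative.
    int main() {
      const std::size_t small_page = 64u << 10;   // 64K base page (aarch64 example)
      const std::size_t flag_chunk = 4u << 20;    // PreTouchParallelChunkSize default
      const bool use_thp = true;                  // stand-in for UseTransparentHugePages
      std::size_t page_size = 512u << 20;         // 512M THP-backed region

      // Step 1: with THP, pretouch must assume small pages initially.
      if (use_thp) page_size = small_page;
      // Step 2: cap the per-thread chunk from below by the page size.
      const std::size_t chunk_size = std::max(flag_chunk, page_size);
      // Step 3: only now would the PretouchTask be constructed with both.
      std::printf("page %zu chunk %zu\n", page_size, chunk_size);
      return 0;
    }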

> The only thing I could see is that in case the OS already gave us large pages (i.e. 512M), iterating over the same page using multiple threads may cause performance issues, although for the startup case, x64 does not seem to care (for me, for 20g heaps) and the default of 4M seems to be fastest as shown in JDK-8254699 (https://bugs.openjdk.java.net/browse/JDK-8254699) (and afaik with THP you always get small pages at first).

Please see https://github.com/torvalds/linux/blob/a09b1d78505eb9fe27597a5174c61a7c66253fe8/Documentation/admin-guide/mm/hugetlbpage.rst.
We cannot make assumptions about the size of large pages; this is not specific to any architecture, x64, aarch64, or otherwise. Users can configure the kernel with whatever page size they want, as long as the architecture supports it. So x64 can face a 512MB large page, while aarch64 can work with a 2MB large page too.
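
As an illustration, here is a minimal Linux-only C++ sketch that reads the kernel's configured default hugepage size from /proc/meminfo (the path and the "Hugepagesize:" field are standard Linux interfaces); it is an example, not JVM code:

    #include <cstdio>
    #include <cstring>

    // Print the kernel's default hugepage size; the value is boot/kernel
    // configuration, not an architectural constant.
    int main() {
      FILE* f = std::fopen("/proc/meminfo", "r");
      if (f == nullptr) return 1;
      char line[256];
      while (std::fgets(line, sizeof(line), f) != nullptr) {
        if (std::strncmp(line, "Hugepagesize:", 13) == 0) {
          std::fputs(line, stdout);  // e.g. "Hugepagesize:    2048 kB" or "524288 kB"
          break;
        }
      }
      std::fclose(f);
      return 0;
    }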

> I can't see how setting the chunk size to 4k shows "the same problem" on x64, as it does not show with the 4M (default) chunk size and 1g (huge) pages. E.g. chunk size = 4M

Please see the test results I attached: https://bugs.openjdk.java.net/secure/attachment/92623/pretouch_chunk_size_fix_testing.txt
Cases 2), 3), and 4) were done on x86 servers with various `-XX:PreTouchParallelChunkSize` settings.

> Even without THP, using 4M chunks (while still using 1g pages for the Java heap) seems to be consistently faster.

Again, I agree it is faster under some conditions, but not all.

> I would suggest that in this case the correct fix would be to do the same testing as done in JDK-8254699 and add an aarch64-specific default for `-XX:PreTouchParallelChunkSize`.

I disagree; that would hurt startup time on most systems configured by default, e.g., CentOS 8 Stream on aarch64.

> The suggested change (to increase chunk size based on page size, particularly with THP enabled) seems not to fix the issue (suboptimal default chunk size) and also regresses performance on x64, which I would prefer to avoid.
 
No, it does not hurt default-configured systems on x64, since the large page size there is 2M, which means the 4M default chunk size can still work very well.
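
To illustrate the arithmetic (a sketch with illustrative constants, not measured data): capping the chunk from below by the actual large page size leaves the x64 default untouched, while raising it on a 512M-page aarch64 system:

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>

    int main() {
      const std::size_t chunk_default = 4u << 20;    // 4M default chunk
      const std::size_t x64_page      = 2u << 20;    // common x64 hugepage: 2M
      const std::size_t arm_page      = 512u << 20;  // 512M hugepage (64K-granule kernel)

      // x64: MAX2(4M, 2M) = 4M, so the default chunk size is unchanged.
      std::printf("x64     chunk: %zu\n", std::max(chunk_default, x64_page));
      // aarch64: MAX2(4M, 512M) = 512M, so each thread touches whole pages.
      std::printf("aarch64 chunk: %zu\n", std::max(chunk_default, arm_page));
      return 0;
    }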

> (There is still the issue of whether it makes sense to have a smaller chunk size than page size _without_ THP, but that is not the issue here afaict)

As far as I can tell, this change does not change behaviour when not on LINUX, or when THP is not in use. Please double check.

-------------

PR: https://git.openjdk.java.net/jdk16/pull/97


