RFR: JDK-8312182: THPs cause huge RSS due to thread start timing issue

Thomas Stuefe stuefe at openjdk.org
Wed Jul 19 07:41:59 UTC 2023


On Wed, 19 Jul 2023 04:21:26 GMT, David Holmes <dholmes at openjdk.org> wrote:

>> If Transparent Huge Pages are unconditionally enabled (`/sys/kernel/mm/transparent_hugepage/enabled` contains `[always]`), Java applications that use many threads may see a huge Resident Set Size. That RSS comes from thread stacks being mostly paged in: thread stack memory is collapsed into huge pages by `khugepaged`; later, those huge pages usually shatter into small pages when Java guard pages are established at thread start, but the small splinter pages left behind stay resident.
>> 
>> [JDK-8303215](https://bugs.openjdk.org/browse/JDK-8303215) attempted to fix this problem by making it unlikely that thread stack boundaries are aligned to THP page size. Unfortunately, that was not sufficient. We still see JVMs with huge footprints, especially if they create many Java threads in rapid succession.
>> 
>> Note that this effect is independent of any JVM switches; in particular, it happens regardless of `-XX:+UseTransparentHugePages` or `-XX:+UseLargePages`.
>> 
>> Update: tests show that the interference of `khugepaged` also costs performance when starting threads, and this patch addresses both footprint and performance problems.
>> 
>> ##### Demonstration:
>> 
>> Linux 5.15 on x64, glibc 2.31: 10000 idle threads with a 100 MB pre-touched Java heap and `-Xss2M` will consume:
>> 
>> A) Baseline (THP disabled on system):  *369 MB*
>> B) THP="always", JDK-8303215 present: *1.5 GB .. >2 GB* (very wobbly)
>> C) THP="always", JDK-8303215 present, artificial delay after thread start: **20.6 GB** (!).
>> 
>> 
>> #### Cause:
>> 
>> The problem is caused by timing. When we create multiple Java threads, the following sequence of actions happens:
>> 
>> In the parent thread:
>> - the parent thread calls `pthread_create(3)`
>> - `pthread_create(3)` creates the thread stack by calling `mmap(2)`
>> - `pthread_create(3)` calls `clone(2)` to start the child thread
>> - repeat to start more threads
>> 
>> Each child thread:
>> - queries its stack dimensions
>> - handshakes with the parent to signal liveness
>> - establishes guard pages at the low end of the stack
>> 
>> The thread stack mapping is established in the parent thread; the guard pages are placed by the child threads. There is a time window in which the thread stack is already mapped into address space, but guard pages still need to be placed.
>> 
>> If the parent is faster than the children, it creates stack mappings faster than the children can place guard pages on them.
>> 
>> For the kernel, these t...
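
A minimal stand-alone sketch of the window described above, using plain pthread calls rather than the HotSpot sources (thread count, stack size, and the `child()` function are illustrative only):

```c++
// Sketch only, not the PR's code: the parent maps thread stacks in a tight
// loop, while each child installs its own guard page later. In the gap,
// khugepaged may back the pristine stack mapping with huge pages; the later
// mprotect() then splits them and the splinter pages stay resident.
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

static void* child(void*) {
  // query own stack dimensions, as HotSpot does (glibc-specific call)
  pthread_attr_t attr;
  pthread_getattr_np(pthread_self(), &attr);
  void* base = nullptr;
  size_t size = 0;
  pthread_attr_getstack(&attr, &base, &size);
  pthread_attr_destroy(&attr);
  // (handshake with the parent omitted)
  // establish a guard page at the low end of the stack - this is the step
  // that shatters an already-collapsed huge page into resident small pages
  mprotect(base, (size_t)sysconf(_SC_PAGESIZE), PROT_NONE);
  pause();  // stay idle so RSS can be inspected, e.g. via /proc/<pid>/status
  return nullptr;
}

int main() {
  pthread_attr_t attr;
  pthread_attr_init(&attr);
  pthread_attr_setstacksize(&attr, 2 * 1024 * 1024);  // roughly -Xss2M
  pthread_attr_setguardsize(&attr, 0);  // guard handled by the child, as in HotSpot
  for (int i = 0; i < 10000; i++) {
    pthread_t t;
    // the stack is mmap'ed here, in the parent; the guard page is placed
    // later, in the child - that gap is the window described above
    if (pthread_create(&t, &attr, child, nullptr) != 0) break;
  }
  printf("check RSS of pid %d\n", (int)getpid());
  pause();
  return 0;
}
```
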
>
> src/hotspot/os/linux/os_linux.cpp line 934:
> 
>> 932:     guard_size = MAX2(guard_size, os::vm_page_size());
>> 933:     // Add an additional page to the stack size to reduce its chances of getting huge page aligned
>> 934:     // so that the stack does not get backed by a transparent huge page.
> 
> I don't think these two adjustments should be combined like this. The existing +1 may be sufficient for some folk who do not want the cost of creating a glibc guard page as well.

I disagree; see my earlier remarks. "May be sufficient" is vague - either you want to prevent THPs from forming in thread stacks, or you are okay with them forming. There is no point in doing one without the other.

If you want to prevent them, you have to do both, since the effects laid out in this PR are not predictable. Even if your application does not create threads - and why would you care about that one guard page then - the JVM itself creates a number of threads in quick succession and thus suffers from the same effects.

And if you want to have THPs, then you should disable both mitigations together.
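
To make the two mitigations concrete, here is a hedged sketch in plain pthread terms rather than the actual HotSpot change (the helper name and sizes are made up for illustration):

```c++
// Sketch only - illustrates the two mitigations discussed above in plain
// pthread terms; this is not the HotSpot patch itself.
#include <pthread.h>
#include <unistd.h>

static void* worker(void*) { return nullptr; }

// hypothetical helper, for illustration
int create_thread_with_thp_mitigations(pthread_t* t, size_t stack_size) {
  const size_t page = (size_t)sysconf(_SC_PAGESIZE);
  pthread_attr_t attr;
  pthread_attr_init(&attr);
  // Mitigation 1: request a glibc guard page, so the PROT_NONE guard is
  // already in place when the parent maps the stack, instead of being
  // established later by the child (closing the timing window).
  pthread_attr_setguardsize(&attr, page);
  // Mitigation 2: add one extra page to the stack size to make it unlikely
  // that the stack ends up huge-page sized and aligned (JDK-8303215).
  pthread_attr_setstacksize(&attr, stack_size + page);
  int rc = pthread_create(t, &attr, worker, nullptr);
  pthread_attr_destroy(&attr);
  return rc;
}
```

Conversely, keeping THPs in thread stacks would mean dropping both adjustments together.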

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/14919#discussion_r1267667412
