RFR: JDK-8312182: THPs cause huge RSS due to thread start timing issue

Poonam Bajaj poonam at openjdk.org
Wed Jul 19 15:36:19 UTC 2023


On Tue, 18 Jul 2023 08:14:56 GMT, Thomas Stuefe <stuefe at openjdk.org> wrote:

> If Transparent Huge Pages are unconditionally enabled (`/sys/kernel/mm/transparent_hugepage/enabled` contains `[always]`), Java applications that use many threads may see a huge Resident Set Size. That RSS is caused by thread stacks being mostly paged in. This page-in is caused by thread stack memory being transformed into huge pages by `khugepaged`; later, those huge pages usually shatter into small pages when Java guard pages are established at thread start, but the remaining splinter small pages remain paged in.
> 
> [JDK-8303215](https://bugs.openjdk.org/browse/JDK-8303215) attempted to fix this problem by making it unlikely that thread stack boundaries are aligned to the THP page size. Unfortunately, that was not sufficient. We still see JVMs with huge footprints, especially if they created many Java threads in rapid succession.
> 
> Note that this effect is independent of any JVM switches; in particular, it happens regardless of `-XX:+UseTransparentHugePages` or `-XX:+UseLargePages`.
> 
> Update: tests show that the interference of `khugepaged` also costs performance when starting threads, and this patch addresses both footprint and performance problems.
> 
> ##### Demonstration:
> 
> Linux 5.15 on x64, glibc 2.31: 10000 idle threads with 100 MB pre-touched Java heap and `-Xss2M` will consume:
> 
> A) Baseline (THP disabled on system):  *369 MB*
> B) THP="always", JDK-8303215 present: *1.5 GB .. >2 GB* (very wobbly)
> C) THP="always", JDK-8303215 present, artificial delay after thread start: **20.6 GB** (!).
> 
> 
> #### Cause:
> 
> The problem is caused by timing. When we create multiple Java threads, the following sequence of actions happens:
> 
> In the parent thread:
> - the parent thread calls `pthread_create(3)`
> - `pthread_create(3)` creates the thread stack by calling `mmap(2)`
> - `pthread_create(3)` calls `clone(2)` to start the child thread
> - repeat to start more threads
> 
> Each child thread:
> - queries its stack dimensions
> - handshakes with the parent to signal liveness
> - establishes guard pages at the low end of the stack
> 
> The thread stack mapping is established in the parent thread; the guard pages are placed by the child threads. There is a time window in which the thread stack is already mapped into address space, but guard pages still need to be placed.
> 
> If the parent is faster than the children, it will have created mappings faster than the children can place guard pages on them.
> 
> For the kernel, these thread stacks are just anonymous mappings. It places them adjacent to each ...

src/hotspot/os/linux/os_linux.cpp line 932:

> 930:   // into one VMA.
> 931:   if (PreventTHPsForThreadStacks) {
> 932:     guard_size = MAX2(guard_size, os::vm_page_size());

It would be helpful to add a comment clarifying that we don't add glibc guard pages for JavaThreads and compiler threads (i.e., `default_guard_size` returns 0 for these threads), and that when `PreventTHPsForThreadStacks` is true, JavaThreads and compiler threads will have both the glibc guard page and the JVM guard pages.

src/hotspot/os/linux/os_linux.cpp line 936:

> 934:     // so that the stack does not get backed by a transparent huge page.
> 935:     if (HugePages::thp_pagesize() > 0 &&
> 936:         is_aligned(stack_size, HugePages::thp_pagesize())) {

Do we want to add this additional page even when `stack_size` is less than `thp_pagesize`?

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/14919#discussion_r1268249799
PR Review Comment: https://git.openjdk.org/jdk/pull/14919#discussion_r1268251876


More information about the hotspot-runtime-dev mailing list