Large number of VMAs for large ZGC heap

Stefan Karlsson stefan.karlsson at oracle.com
Wed Apr 26 07:35:21 UTC 2023


On 2023-04-25 17:31, Thomas Stüfe wrote:
> Hi Stefan,
>
> thanks a lot for your answers. Wrt THPs, yes, it would be wise to use 
> explicit huge pages.
>
> Does the single ZUnmapper thread compete with all mutator threads for 
> the page allocator?

In most cases the mutator threads don't compete with the ZUnmapper 
thread (except for CPU time). However, if we need to allocate either a 
medium page or a large page, and we can't grow the heap any more, and 
there's no large enough page in the page cache, then we gather a bunch 
of free pages from the page cache (i.e. page cache flushing), "steal" 
their physical memory, and assign it to a new virtual memory range of 
the required size. Then we put the flushed pages onto the unmap queue 
and let the ZUnmapper thread deal with them. So, manipulating the unmap 
queue takes a lock, and that lock is what the mutator and ZUnmapper 
threads compete for. I first thought that contention on this lock caused 
the issues we were seeing in our internal tests, but for us the problem 
seemed to be much more that the ZUnmapper thread wasn't getting enough 
run time.

If you start to see messages about "Page Cache Flushed: " in the gc logs, 
then you know that we have taken the path described above.
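
To make the hand-off concrete, here is a small standalone sketch of the 
pattern described above: a lock-protected queue that the allocating 
threads push flushed pages onto, and a dedicated unmapper thread that 
drains it. This is not the actual HotSpot code (see zUnmapper.cpp for 
that); names like FlushedPage and enqueue_for_unmap are made up purely 
for illustration:

```
// Simplified sketch of the unmap hand-off; not the real ZGC code.
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

struct FlushedPage { void* start; std::size_t size; };  // stand-in for a ZPage

static std::mutex              queue_lock;   // the lock both sides compete for
static std::condition_variable queue_cv;
static std::deque<FlushedPage> unmap_queue;  // pages waiting to be unmapped

// Allocation path: after flushing the page cache, hand the stale mappings over.
void enqueue_for_unmap(FlushedPage page) {
  std::lock_guard<std::mutex> guard(queue_lock);
  unmap_queue.push_back(page);
  queue_cv.notify_one();
}

// ZUnmapper-like thread: drains the queue and does the (slow) unmapping.
// If this thread is starved of CPU, the queue keeps growing, and so does
// the number of stale VMAs behind it.
void unmapper_loop() {
  for (;;) {
    FlushedPage page;
    {
      std::unique_lock<std::mutex> guard(queue_lock);
      queue_cv.wait(guard, [] { return !unmap_queue.empty(); });
      page = unmap_queue.front();
      unmap_queue.pop_front();
    }
    // munmap(page.start, page.size);  // the actual work happens outside the lock
  }
}
```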

StefanK

>
> Thanks, Thomas
>
>
>
>
> On Tue, Apr 25, 2023 at 2:59 PM Stefan Karlsson 
> <stefan.karlsson at oracle.com> wrote:
>
>     Hi Thomas,
>
>     On 2023-04-25 09:58, Thomas Stüfe wrote:
>>     Hi ZGC experts,
>>
>>     I see a strangeness with one of our customers running JDK 17 with
>>     ZGC, THP enabled (always), and a large heap of 4.6TB.
>
>     Side note: be careful about using THP if you expect good
>     latencies, but if you do want to use THP with ZGC, make sure to
>     also change:
>
>     /sys/kernel/mm/transparent_hugepage/shmem_enabled
>
>     https://wiki.openjdk.org/display/zgc
>>
>>     The number of VMAs exceeds 20 million. I try to understand
>>     whether that is normal or pathological.
>>
>>     Looking at maps, I see millions of adjacent VMAs that point into
>>     the heap to different offsets:
>>
>>     ```
>>     15fc5f600000-15fc5f800000 rw-s 24630400000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     15fc5f800000-15fc5fa00000 rw-s 2504e600000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     15fc5fa00000-15fc5fc00000 rw-s 25330000000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     15fc5fc00000-15fc5fe00000 rw-s 26324200000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     15fc5fe00000-15fc60000000 rw-s 26f03a00000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     ```
>>
>>     The different offsets prevent these mappings from being folded.
>>
>>     The number of mappings surpasses what would be needed to map the
>>     heap. Almost all are 2MB mappings:
>>
>>     Total number of mappings: 18634289
>>     Number of 2MB mappings:   18529201
>>     Per color: 6211420 / 6211429 / 6211439
>>
>>     The total address space covered by these 2MB mappings is 38TB.
>>     Taking into account the triple-mapping, we still map about 12TB
>>     per color. That far exceeds the necessary room for a 4.6TB heap.
>
>     ZGC reserves a larger address space for the heap than the given
>     max heap size. This is done to make it easier to deal with large
>     objects. There are some hints to the address space layout here:
>     https://github.com/openjdk/zgc/blob/5ea960728c5616373c986ae1343b44043c0db487/src/hotspot/cpu/x86/gc/z/zGlobals_x86.cpp
>
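>     To see why one committed ZGC page shows up as (at least) three
>     VMAs, here is a rough illustration of the multi-mapping. The bit
>     positions (44, 45 and 46) are an assumption on my part, based on
>     the 0x1..., 0x2... and 0x4... address prefixes in the maps
>     excerpts you posted; the real layout is computed in
>     zGlobals_x86.cpp and depends on the max heap size:
>
>     ```
>     #include <cstdint>
>     #include <cstdio>
>
>     int main() {
>       // One heap range, taken from your maps excerpts (13438de00000 etc.).
>       const uint64_t heap_range = UINT64_C(0x03438de00000);
>       // Each view differs only in a single metadata bit (assumed 44-46 here).
>       const uint64_t view_bits[3] = {
>         UINT64_C(1) << 44,
>         UINT64_C(1) << 45,
>         UINT64_C(1) << 46,
>       };
>       for (int i = 0; i < 3; i++) {
>         // Prints 13438de00000, 23438de00000 and 43438de00000: the same
>         // backing memory mapped once per view, hence multiple VMAs per page.
>         std::printf("%llx\n", (unsigned long long)(view_bits[i] | heap_range));
>       }
>       return 0;
>     }
>     ```
>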
>>
>>     Examining the mappings, I see that many offsets into the heap are
>>     mapped to multiple points, even discounting the triple mapping.
>>     For example, offset 105fe800000 is mapped six times per color,
>>     for a total of 12 times:
>>
>>     13438de00000-13438e000000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     15bf79400000-15bf79600000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     165022800000-165022a00000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     16fdad200000-16fdad400000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     17b1b9600000-17b1b9800000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     1d9860000000-1d9860200000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>
>>     23438de00000-23438e000000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     25bf79400000-25bf79600000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     265022800000-265022a00000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     26fdad200000-26fdad400000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     27b1b9600000-27b1b9800000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     2d9860000000-2d9860200000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>
>>     43438de00000-43438e000000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     45bf79400000-45bf79600000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     465022800000-465022a00000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     46fdad200000-46fdad400000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     47b1b9600000-47b1b9800000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>     4d9860000000-4d9860200000 rw-s 105fe800000 00:0f 373323680      
>>             /memfd:java_heap.hugetlb (deleted)
>>
>
>     What I think happens here is that when we detach
>     virtual-to-physical memory mappings, we don't do it immediately;
>     instead, the memory is handed over to a separate ZUnmapper thread.
>     If that thread gets starved, typically because of an
>     over-provisioned machine, then these mappings start to build up.
>     You can see the ZUnmapper code here:
>     https://github.com/openjdk/zgc/blob/5ea960728c5616373c986ae1343b44043c0db487/src/hotspot/share/gc/z/zUnmapper.cpp
>
>     I recently looked into this and thought that the starvation
>     happened because of how we take the lock for every ZPage we want
>     to unmap. I prototyped a way to bulk fetch all pages, but that
>     didn't seem to help. AFAICT, the big problem for us was still that
>     the ZUnmapper thread was starved out. The prototype is here:
>     https://github.com/stefank/jdk/tree/zgc_generational_bulk_unmapper
>
>     You can actually see this problem if you monitor the amount of
>     committed memory in the Java heap. When this happens, the reported
>     amount of committed memory increases and can even go past the max
>     heap size. This is a bug caused by how we report our virtual memory
>     to NMT. I created a bug for that:
>     https://bugs.openjdk.org/browse/JDK-8306841
>
>     And a prototype:
>     https://github.com/stefank/jdk/tree/zgc_generational_fix_nmt_overcommit_reporting
>
>>     The ZGC Page table contains close to a million ZGC pages and
>>     looks okay for a heap of that size:
>>     Small:  739175
>>     Medium:  10160
>>     Large:   65495
>>             -------
>>              814830
>>
>>     My question: is such a high number of mappings for ZGC normal?
>
>     A larger number of mappings than strictly needed is normal, but
>     the numbers you show above indicate some kind of performance
>     issue with the system.
>
>     Cheers,
>     StefanK
>
>>
>>     Thank you for your time,
>>
>>     Cheers, Thomas
>