From leihouyju at gmail.com Sun Sep 15 15:51:19 2019
From: leihouyju at gmail.com (Haoyu Li)
Date: Sun, 15 Sep 2019 23:51:19 +0800
Subject: AllocateHeapAt flag cannot take effect in ZGC
Message-ID: 

Hi all,

I'm using ZGC with the -XX:AllocateHeapAt=<*path*> flag, where *path* points to a mounted device. However, this flag does not seem to work with ZGC: I find that the JVM still allocates its heap in DRAM instead of on the specified device. Is there any workaround? Thanks!

Best Regards,
Haoyu Li,
Institute of Parallel and Distributed Systems (IPADS),
School of Software,
Shanghai Jiao Tong University

From per.liden at oracle.com Sun Sep 15 18:39:05 2019
From: per.liden at oracle.com (Per Liden)
Date: Sun, 15 Sep 2019 20:39:05 +0200
Subject: AllocateHeapAt flag cannot take effect in ZGC
In-Reply-To: 
References: 
Message-ID: <5d021137-663f-9d70-9236-338ec68e2e37@oracle.com>

Hi,

On 9/15/19 5:51 PM, Haoyu Li wrote:
> Hi all,
>
> I'm using ZGC with the -XX:AllocateHeapAt=<*path*> flag, where *path* points
> to a mounted device. However, this flag does not seem to work with ZGC: I
> find that the JVM still allocates its heap in DRAM instead of on the
> specified device. Is there any workaround? Thanks!

The -XX:AllocateHeapAt flag is currently not supported with ZGC. The -XX:ZPath flag has a very similar meaning for ZGC, but you can currently only point it to a tmpfs or hugetlbfs mount, not a DAX-enabled filesystem. There is no technical reason why this can't be supported in ZGC; it just hasn't been implemented yet.

cheers,
Per

From daweil1 at student.unimelb.edu.au Tue Sep 17 03:28:23 2019
From: daweil1 at student.unimelb.edu.au (Dawei Li)
Date: Tue, 17 Sep 2019 11:28:23 +0800
Subject: Enquiries on Load Barrier Overhead
Message-ID: 

Hi ZGC Team,

While reading the Jfokus 2018 slides about ZGC, I came up with a few questions about the load barrier overhead on slide 44.

Question 1: How do you measure the execution overhead on SPECjbb 2015?
How do you configure baseline testing and, correspondingly, overhead testing?

Question 2: Is the *slow* *path* a source of overhead? Is it the *main* source?

Looking forward to your reply! Thank you!

Warm Regards,
Dawei Li

From per.liden at oracle.com Tue Sep 17 06:28:33 2019
From: per.liden at oracle.com (Per Liden)
Date: Tue, 17 Sep 2019 08:28:33 +0200
Subject: Enquiries on Load Barrier Overhead
In-Reply-To: 
References: 
Message-ID: <13b5e05b-b8b7-d8f5-e184-cd856e17e986@oracle.com>

Hi,

On 9/17/19 5:28 AM, Dawei Li wrote:
> Hi ZGC Team,
>
> While reading the Jfokus 2018 slides about ZGC, I came up with a few
> questions about the load barrier overhead on slide 44.
>
> Question 1: How do you measure the execution overhead on SPECjbb 2015? How
> do you configure baseline testing and, correspondingly, overhead testing?

This is a bit tricky to measure, and there have been various attempts and approaches. In this particular case, the ZGC load barrier was added to ParallelGC, with an always-successful fast-path check. The performance (benchmark score, instructions retired, etc.) of this Franken-ParallelGC was then compared against vanilla ParallelGC.

>
> Question 2: Is the *slow* *path* a source of overhead? Is it the *main*
> source?

On a typical workload, the slow path is taken on the order of once in a million loads, so the overhead is dominated by the fast path. During marking, the slow path is on the order of 150 instructions, with an occasional cache miss when pushing to the mark stacks. During relocation, the slow path can be more expensive in the somewhat rare case where the mutator thread needs to relocate a medium-sized object.

/Per

>
> Looking forward to your reply! Thank you!
>
> Warm Regards,
> Dawei Li
>

From m.sundar85 at gmail.com Fri Sep 20 01:54:29 2019
From: m.sundar85 at gmail.com (Sundara Mohan M)
Date: Thu, 19 Sep 2019 18:54:29 -0700
Subject: Resident/Shared memory size is showing 3 times the given heap size
Message-ID: 

Hi,
We are running our server with ZGC and seeing that the resident memory size is approximately 3 times the given heap size. I am a newbie still trying to understand the basic concepts. Can someone help me understand this better?

1. Is this expected?

2. At a high level I know ZGC has multiple views of the memory region. Can you explain how this can happen?

3. Also, I see something like this in /proc//maps:

...
13fff9a00000-13fff9c00000 rw-s 5f8e00000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
13fff9c00000-13fff9e00000 rw-s 5fb000000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
13fff9e00000-13fffa000000 rw-s 603200000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
13fffa000000-13fffa200000 rw-s 612c00000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
13fffa200000-13fffa400000 rw-s 61a800000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
13fffa400000-13fffa600000 rw-s 326000000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
13fffa600000-13fffa800000 rw-s 5d3600000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
...

Why is this still in the deleted state?

4. Trying to get a heap dump with the following command: jmap -heap (tried with the same user as well as root, but it does not print the heap). Is there something changed regarding dumping the heap with ZGC?
TIA
Sundar

From per.liden at oracle.com Fri Sep 20 07:43:26 2019
From: per.liden at oracle.com (Per Liden)
Date: Fri, 20 Sep 2019 09:43:26 +0200
Subject: Resident/Shared memory size is showing 3 times the given heap size
In-Reply-To: 
References: 
Message-ID: <31ec3fc7-dcad-7277-effa-2d97284abf02@oracle.com>

Hi,

On 9/20/19 3:54 AM, Sundara Mohan M wrote:
> Hi,
> We are running our server with ZGC and seeing that the resident memory size
> is approximately 3 times the given heap size. I am a newbie still trying to
> understand the basic concepts. Can someone help me understand this better?
>
> 1. Is this expected?

Yes

> 2. At a high level I know ZGC has multiple views of the memory region. Can
> you explain how this can happen?

Exactly. RSS (resident set size) basically just says how much memory the process has mapped at the moment. It doesn't take into account that some of these mappings might be backed by the same memory, or might be shared with other processes, etc. A more interesting number, which gives a better view of how much memory a process is actually using, is PSS (proportional set size). There are various tools that display PSS (smem, procrank, ps_mem.py, etc.), but the raw data is available in /proc//smaps_rollup.

For example, a JVM running ZGC with a 4G heap looks like this:

$ cat /proc/4509/smaps_rollup
00400000-7ffd472e2000 ---p 00000000 00:00 0          [rollup]
Rss:            12843604 kB
Pss:             4451825 kB
Shared_Clean:       3232 kB
Shared_Dirty:   12582912 kB
Private_Clean:     14380 kB
Private_Dirty:    243080 kB
Referenced:     12843604 kB
Anonymous:        243048 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB

Here the multi-mapping causes the RSS to be inflated by 3x, while PSS shows a more accurate number, reflecting the fact that the three different heap mappings are backed by the same memory.

> 3. Also, I see something like this in /proc//maps:
>
> ...
> 13fff9a00000-13fff9c00000 rw-s 5f8e00000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
> 13fff9c00000-13fff9e00000 rw-s 5fb000000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
> 13fff9e00000-13fffa000000 rw-s 603200000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
> 13fffa000000-13fffa200000 rw-s 612c00000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
> 13fffa200000-13fffa400000 rw-s 61a800000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
> 13fffa400000-13fffa600000 rw-s 326000000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
> 13fffa600000-13fffa800000 rw-s 5d3600000 00:12 1488924473 /mnt/tmpfs/java_heap.10087 (deleted)
> ...
> Why is this still in the deleted state?

The "(deleted)" state just means that the file has been unlinked. That is, the file is still in use, but the directory entry /mnt/tmpfs/java_heap.10087 has been deleted (as it should be).

> 4. Trying to get a heap dump with the following command: jmap -heap (tried
> with the same user as well as root, but it does not print the heap). Is there
> something changed regarding dumping the heap with ZGC?

IIRC, jmap -heap was removed in JDK 9, use jcmd instead:

$ jcmd GC.heap_dump

cheers,
Per

>
> TIA
> Sundar
>

From alex at scalyr.com Fri Sep 20 16:54:26 2019
From: alex at scalyr.com (Alex Elent)
Date: Fri, 20 Sep 2019 09:54:26 -0700
Subject: Resident/Shared memory size is showing 3 times the given heap size
In-Reply-To: <31ec3fc7-dcad-7277-effa-2d97284abf02@oracle.com>
References: <31ec3fc7-dcad-7277-effa-2d97284abf02@oracle.com>
Message-ID: 

Hi Sundara,

I had a similar question in January: http://mail.openjdk.java.net/pipermail/zgc-dev/2019-January/000570.html

@Erik Österlund recommended Native Memory Tracking (NMT) and I've been using it with success.
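For anyone who hasn't used NMT before, a minimal session looks roughly like this (a sketch; the application name "MyApp" is a placeholder, and <pid> is your JVM's process id):

```shell
# Start the JVM with native memory tracking enabled ("summary" or "detail").
java -XX:NativeMemoryTracking=summary -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx4g MyApp

# Query the running JVM: prints reserved/committed memory per category
# (Java Heap, Class, Thread, Code, GC, ...).
jcmd <pid> VM.native_memory summary

# Take a baseline now, then later show only what changed since the baseline.
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff
```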
On Fri, Sep 20, 2019 at 12:44 AM Per Liden wrote: > Hi, > > On 9/20/19 3:54 AM, Sundara Mohan M wrote: > > Hi, > > We are running our server with ZGC and seeing the resident memory > size > > is approximately 3 times the given heap size. I am a newbie still trying > to > > understand basic concepts. Can someone help me understand this better > > > > 1. Is this expected? > > Yes > > > 2. At high level i know ZGC is having multiple view of memory region. Can > > you explain how this can happen? > > Exactly. RSS (Resident set size) basically just says how much memory the > process has mapped at the moment. It doesn't take into account that some > of these mappings might be backed by the same memory, or might be are > shared with other process, etc. A more interesting number, to get a > better view of how much memory a process is actually using is PSS > (Proportional set size). There are various tools that display PSS (smem, > procrank, ps_mem.py, etc), but the raw data is available in > /proc//smaps_rollup. > > For example, a JVM running ZGC with a 4G heap looks like this: > > $ cat /proc/4509/smaps_rollup > 00400000-7ffd472e2000 ---p 00000000 00:00 0 > [rollup] > Rss: 12843604 kB > Pss: 4451825 kB > Shared_Clean: 3232 kB > Shared_Dirty: 12582912 kB > Private_Clean: 14380 kB > Private_Dirty: 243080 kB > Referenced: 12843604 kB > Anonymous: 243048 kB > LazyFree: 0 kB > AnonHugePages: 0 kB > ShmemPmdMapped: 0 kB > Shared_Hugetlb: 0 kB > Private_Hugetlb: 0 kB > Swap: 0 kB > SwapPss: 0 kB > Locked: 0 kB > > Here the multi-mapping causes the RSS to be inflated by 3x, while PSS > shows a more accurate number reflecting the fact that the three > different heap mappings are backed by the same memory. > > > 3. Also i see something like this in /proc//maps > > > > ... 
> > 13fff9a00000-13fff9c00000 rw-s 5f8e00000 00:12 1488924473 > > /mnt/tmpfs/java_heap.10087 (deleted) > > 13fff9c00000-13fff9e00000 rw-s 5fb000000 00:12 1488924473 > > /mnt/tmpfs/java_heap.10087 (deleted) > > 13fff9e00000-13fffa000000 rw-s 603200000 00:12 1488924473 > > /mnt/tmpfs/java_heap.10087 (deleted) > > 13fffa000000-13fffa200000 rw-s 612c00000 00:12 1488924473 > > /mnt/tmpfs/java_heap.10087 (deleted) > > 13fffa200000-13fffa400000 rw-s 61a800000 00:12 1488924473 > > /mnt/tmpfs/java_heap.10087 (deleted) > > 13fffa400000-13fffa600000 rw-s 326000000 00:12 1488924473 > > /mnt/tmpfs/java_heap.10087 (deleted) > > 13fffa600000-13fffa800000 rw-s 5d3600000 00:12 1488924473 > > /mnt/tmpfs/java_heap.10087 (deleted) > > ... > > Why is this still in deleted state? > > The "(deleted)" state just means that the file has been unlinked. I.e. > the file is still in use, but the directory entry > /mnt/tmpfs/java_heap.10087 has been deleted (as it should be). > > > 4. Trying to get heap dump with following command > > jmap -heap (tried with same user as well root but it is not > printing > > heap) > > Is there something changed regarding dumping heap with ZGC? > > IIRC, jmap -heap was removed in JDK 9, use jcmd instead: > > $ jcmd GC.heap_dump > > cheers, > Per > > > > > TIA > > Sundar > > > From m.sundar85 at gmail.com Fri Sep 20 19:12:03 2019 From: m.sundar85 at gmail.com (Sundara Mohan M) Date: Fri, 20 Sep 2019 12:12:03 -0700 Subject: Resident/Shared memory size is showing 3 times the given heap size In-Reply-To: References: <31ec3fc7-dcad-7277-effa-2d97284abf02@oracle.com> Message-ID: Thanks Per and Alex. This helped a lot on understanding ZGC and process memory consumption better. Regards Sundar On Fri, Sep 20, 2019 at 9:54 AM Alex Elent wrote: > Hi Sundara, > > I had a similar question in January > http://mail.openjdk.java.net/pipermail/zgc-dev/2019-January/000570.html. > > @Erik ?sterlund recommended Native Memory > Tracking (NMT) and I've been using it with success. 
> > On Fri, Sep 20, 2019 at 12:44 AM Per Liden wrote: > >> Hi, >> >> On 9/20/19 3:54 AM, Sundara Mohan M wrote: >> > Hi, >> > We are running our server with ZGC and seeing the resident memory >> size >> > is approximately 3 times the given heap size. I am a newbie still >> trying to >> > understand basic concepts. Can someone help me understand this better >> > >> > 1. Is this expected? >> >> Yes >> >> > 2. At high level i know ZGC is having multiple view of memory region. >> Can >> > you explain how this can happen? >> >> Exactly. RSS (Resident set size) basically just says how much memory the >> process has mapped at the moment. It doesn't take into account that some >> of these mappings might be backed by the same memory, or might be are >> shared with other process, etc. A more interesting number, to get a >> better view of how much memory a process is actually using is PSS >> (Proportional set size). There are various tools that display PSS (smem, >> procrank, ps_mem.py, etc), but the raw data is available in >> /proc//smaps_rollup. >> >> For example, a JVM running ZGC with a 4G heap looks like this: >> >> $ cat /proc/4509/smaps_rollup >> 00400000-7ffd472e2000 ---p 00000000 00:00 0 >> [rollup] >> Rss: 12843604 kB >> Pss: 4451825 kB >> Shared_Clean: 3232 kB >> Shared_Dirty: 12582912 kB >> Private_Clean: 14380 kB >> Private_Dirty: 243080 kB >> Referenced: 12843604 kB >> Anonymous: 243048 kB >> LazyFree: 0 kB >> AnonHugePages: 0 kB >> ShmemPmdMapped: 0 kB >> Shared_Hugetlb: 0 kB >> Private_Hugetlb: 0 kB >> Swap: 0 kB >> SwapPss: 0 kB >> Locked: 0 kB >> >> Here the multi-mapping causes the RSS to be inflated by 3x, while PSS >> shows a more accurate number reflecting the fact that the three >> different heap mappings are backed by the same memory. >> >> > 3. Also i see something like this in /proc//maps >> > >> > ... 
>> > 13fff9a00000-13fff9c00000 rw-s 5f8e00000 00:12 1488924473 >> > /mnt/tmpfs/java_heap.10087 (deleted) >> > 13fff9c00000-13fff9e00000 rw-s 5fb000000 00:12 1488924473 >> > /mnt/tmpfs/java_heap.10087 (deleted) >> > 13fff9e00000-13fffa000000 rw-s 603200000 00:12 1488924473 >> > /mnt/tmpfs/java_heap.10087 (deleted) >> > 13fffa000000-13fffa200000 rw-s 612c00000 00:12 1488924473 >> > /mnt/tmpfs/java_heap.10087 (deleted) >> > 13fffa200000-13fffa400000 rw-s 61a800000 00:12 1488924473 >> > /mnt/tmpfs/java_heap.10087 (deleted) >> > 13fffa400000-13fffa600000 rw-s 326000000 00:12 1488924473 >> > /mnt/tmpfs/java_heap.10087 (deleted) >> > 13fffa600000-13fffa800000 rw-s 5d3600000 00:12 1488924473 >> > /mnt/tmpfs/java_heap.10087 (deleted) >> > ... >> > Why is this still in deleted state? >> >> The "(deleted)" state just means that the file has been unlinked. I.e. >> the file is still in use, but the directory entry >> /mnt/tmpfs/java_heap.10087 has been deleted (as it should be). >> >> > 4. Trying to get heap dump with following command >> > jmap -heap (tried with same user as well root but it is not >> printing >> > heap) >> > Is there something changed regarding dumping heap with ZGC? >> >> IIRC, jmap -heap was removed in JDK 9, use jcmd instead: >> >> $ jcmd GC.heap_dump >> >> cheers, >> Per >> >> > >> > TIA >> > Sundar >> > >> > From stefan.reich.maker.of.eye at googlemail.com Mon Sep 23 12:14:21 2019 From: stefan.reich.maker.of.eye at googlemail.com (Stefan Reich) Date: Mon, 23 Sep 2019 14:14:21 +0200 Subject: The Linux process size reporting issue Message-ID: I'm still on it... I _really_ need to know how big my processes are, and I can't with ZGC. https://superuser.com/questions/1485370/linux-misreports-process-size-with-heap-multi-mapping Maybe take this to a kernel mailing list? 
Greetings,
Stefan

--
Stefan Reich
BotCompany.de // Java-based operating systems

From per.liden at oracle.com Mon Sep 23 13:12:30 2019
From: per.liden at oracle.com (Per Liden)
Date: Mon, 23 Sep 2019 15:12:30 +0200
Subject: The Linux process size reporting issue
In-Reply-To: 
References: 
Message-ID: 

This question came up just a couple of days ago, please see this thread: https://mail.openjdk.java.net/pipermail/zgc-dev/2019-September/000731.html

cheers,
Per

On 9/23/19 2:14 PM, Stefan Reich wrote:
> I'm still on it... I _really_ need to know how big my processes are, and I
> can't with ZGC.
>
> https://superuser.com/questions/1485370/linux-misreports-process-size-with-heap-multi-mapping
>
> Maybe take this to a kernel mailing list?
>
> Greetings,
> Stefan
>

From pme at activeviam.com Thu Sep 26 10:32:18 2019
From: pme at activeviam.com (Pierre Mevel)
Date: Thu, 26 Sep 2019 12:32:18 +0200
Subject: Metaspace Threshold nullifying warmup cycles and configuration questions
Message-ID: 

Good Morning,

I have been trying out ZGC on our application for a few months now, and have a few questions. For the record, I work on an in-memory OLAP database, and we are very interested in ZGC's promises for multi-TB heaps.

The following observations were made on an E64_v3 Azure server with 64 vCPUs and 432 GB of RAM, split into 210 GB for the heap and 210 GB for off-heap storage, which is almost entirely filled with the database content. ConcGCThreads is set to 20, higher than the default.

First, we very quickly fill the Metaspace, which triggers the first three GCs (Metaspace GC Threshold cause) in the application. These three arrive very quickly, and unfortunately they also nullify the warm-up cycles, i.e. the proactive GCs at 10%, 20% and 30% heap usage.

As the application doesn't do much at first (distributed OLAP cubes are discovering each other), the next GC cycle is triggered by the proactive rule of 5 minutes.
At this point, the GC cycle statistics give:

[info ] GC(3)                 Mark Start       Mark End    Relocate Start   Relocate End      High            Low
[info ] GC(3) Capacity:  215040M (100%)  215040M (100%)  215040M (100%)  215040M (100%)  215040M (100%)  215040M (100%)
[info ] GC(3)  Reserve:     110M (0%)       110M (0%)       110M (0%)       110M (0%)       110M (0%)       110M (0%)
[info ] GC(3)     Free:  213860M (99%)   213854M (99%)   213850M (99%)   214778M (100%)  214778M (100%)  213742M (99%)
[info ] GC(3)     Used:    1070M (0%)      1076M (1%)      1080M (1%)       152M (0%)      1188M (1%)       152M (0%)

These cycles have a weird effect: MaxDurationOfGC ends up very low, and there are no warm-up cycles for the application.

After this very fast cycle (GC(3)), the application runs its course. The allocation rate average over 10 minutes (although less time than that had elapsed) was around 100 MB/s, and it rises quite significantly afterward, but we hit another proactive cycle after 5 minutes. That will be the last proactive one.

[info ] GC(4)                 Mark Start       Mark End    Relocate Start   Relocate End      High            Low
[info ] GC(4) Capacity:  215040M (100%)  215040M (100%)  215040M (100%)  215040M (100%)  215040M (100%)  215040M (100%)
[info ] GC(4)  Reserve:     110M (0%)       110M (0%)       110M (0%)       110M (0%)       110M (0%)       110M (0%)
[info ] GC(4)     Free:  193266M (90%)   192580M (90%)   185320M (86%)   203730M (95%)   203734M (95%)   185180M (86%)
[info ] GC(4)     Used:   21664M (10%)    22350M (10%)    29610M (14%)    11200M (5%)     29750M (14%)    11196M (5%)

The allocation rate average over 10s is now over 1000 MB/s, and MaxDurationOfGC is 16.8s. And now the issues arrive.
Because the MaxDurationOfGC is "low", the calculated TimeUntilGC is artificially bumped up, and the next cycle happens like this:

[info ] GC(5)                 Mark Start       Mark End    Relocate Start   Relocate End      High            Low
[info ] GC(5) Capacity:  215040M (100%)  215040M (100%)  215040M (100%)  215040M (100%)  215040M (100%)  215040M (100%)
[info ] GC(5)  Reserve:     110M (0%)       110M (0%)       110M (0%)       110M (0%)       110M (0%)        32M (0%)
[info ] GC(5)     Free:   45490M (21%)    39798M (19%)        0M (0%)    140128M (65%)   140152M (65%)        0M (0%)
[info ] GC(5)     Used:  169440M (79%)   175132M (81%)   214930M (100%)   74802M (35%)   215008M (100%)   74778M (35%)

GC(5)'s cause is Allocation Rate, as will be the cause of every cycle after this. It also hits "[info ] GC(5) Relocation: Incomplete", bumps MaxDurationOfGC to 107s, and gets Allocation Stalls across all threads during the concurrent relocate phase.

When it finishes, the next debug log lines read:

[debug] Allocation Rate: 1300.000MB/s, Avg: 1178.000(+/-15.369)MB/s
[debug] Rule: Allocation Rate, MaxAllocRate: 3348.971MB/s, Free: 140102MB, MaxDurationOfGC: 107.356s, TimeUntilGC: -65.622s

I feel like these very long GC cycles at the beginning of the application's run are just a consequence of the Metaspace GC cycles running first. I don't really know how quickly Metaspace fills for other applications. I solved this by setting the -XX:MetaspaceSize flag to a little over what we use during a run, but I wanted to bring this to light, in case it is not intended behavior.

During the run, on a particularly intensive workload, I can get lines such as:

[debug] Allocation Rate: 15420.000MB/s, Avg: 3670.000(+/-1195.062)MB/s
[debug] Rule: Allocation Rate, MaxAllocRate: 14208.385MB/s, Free: 81780MB, MaxDurationOfGC: 11.849s, TimeUntilGC: -6.193s

right after a GC, with an Allocation Rate higher than the MaxAllocRate. This is problematic, because the average allocation rate is artificially lowered by the Allocation Stalls that happened during the preceding cycle.
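In pseudo-Python, the rule behind these debug lines seems to reduce to roughly the following. This is my own sketch, ignoring ZAllocationSpikeTolerance and the sampling details, so it only approximately reproduces the logged numbers:

```python
def time_until_gc(free_mb, max_alloc_rate_mb_s, max_duration_of_gc_s):
    # Seconds until the free heap runs out at the predicted allocation rate,
    # minus the predicted duration of a GC cycle. A negative value means a
    # cycle should already have started (allocation stalls become likely).
    return free_mb / max_alloc_rate_mb_s - max_duration_of_gc_s

# Values from the GC(5) debug line above (logged TimeUntilGC: -65.622s):
print(time_until_gc(140102, 3348.971, 107.356))  # roughly -65.5
```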
I am unsure how to configure the GC to get better results. I am currently trying to increase both the number of concurrent GC threads and the ZAllocationSpikeTolerance, to trigger GC cycles earlier and have them complete faster, at the expense of application speed.

Is this a correct way to proceed?

Thank you very much in advance for your time and answers.
Best Regards,

Pierre Mével
pierre.mevel at activeviam.com
ActiveViam
46 rue de l'arbre sec, 75001 Paris

From per.liden at oracle.com Fri Sep 27 12:04:43 2019
From: per.liden at oracle.com (Per Liden)
Date: Fri, 27 Sep 2019 14:04:43 +0200
Subject: Metaspace Threshold nullifying warmup cycles and configuration questions
In-Reply-To: 
References: 
Message-ID: 

Hi Pierre,

On 9/26/19 12:32 PM, Pierre Mevel wrote:
> Good Morning,
>
> I have been trying out ZGC on our application for a few months now, and
> have a few questions. For the record, I work on an in-memory OLAP database,
> and we are very interested in ZGC's promises for multi-TB heaps.
>
> The following observations were made on an E64_v3 Azure server with 64 vCPUs
> and 432 GB of RAM, split into 210 GB for the heap and 210 GB for off-heap
> storage, which is almost entirely filled with the database content.
> ConcGCThreads is set to 20, higher than the default.
>
> First, we very quickly fill the Metaspace, which triggers the first three
> GCs (Metaspace GC Threshold cause) in the application. These three arrive
> very quickly, and unfortunately they also nullify the warm-up cycles,
> i.e. the proactive GCs at 10%, 20% and 30% heap usage.
>
> As the application doesn't do much at first (distributed OLAP cubes are
> discovering each other), the next GC cycle is triggered by the proactive
> rule of 5 minutes.
At this point, the GC cycle statistics give: > [info ] GC(3) Mark Start Mark End > Relocate Start Relocate End High > Low > [info ] GC(3) Capacity: 215040M (100%) 215040M (100%) 215040M > (100%) 215040M (100%) 215040M (100%) 215040M (100%) > [info ] GC(3) Reserve: 110M (0%) 110M (0%) > 110M (0%) 110M (0%) 110M (0%) > 110M (0%) > [info ] GC(3) Free: 213860M (99%) 213854M (99%) > 213850M (99%) 214778M (100%) 214778M (100%) 213742M (99%) > > [info ] GC(3) Used: 1070M (0%) 1076M (1%) > 1080M (1%) 152M (0%) 1188M (1%) > 152M (0%) > > These cycles have a weird effect: MaxDurationOfGC is super low/rapide, and > there is no warm-up cycles on the application. > > After this very fast cycle (GC(3)), the application runs its course. The > allocation rate average over 10 minutes (less time than that occured) was > around 100Mb/s, and it bumps quite significantly afterward, but we hit > again a proactive cycle after 5 minutes. That will be the last proactive > one. > > [info ] GC(4) Mark Start Mark > End Relocate Start Relocate End > High Low > [info ] GC(4) Capacity: 215040M (100%) 215040M (100%) > 215040M (100%) 215040M (100%) 215040M (100%) 215040M > (100%) > [info ] GC(4) Reserve: 110M (0%) 110M (0%) > 110M (0%) 110M (0%) 110M (0%) > 110M (0%) > [info ] GC(4) Free: 193266M (90%) 192580M (90%) > 185320M (86%) 203730M (95%) 203734M (95%) 185180M > (86%) > [info ] GC(4) Used: 21664M (10%) 22350M (10%) > 29610M (14%) 11200M (5%) 29750M (14%) > 11196M (5%) > > The allocation rate average over 10s is now over 1000Mb/s, and > MaxDurationOfGc is 16.8s. And now the issues arrive. 
Because the > MaxDurationOfGc is "low", the calculated TimeUntilGC is artificially bumped > up, and the next cycle happens like this: > > [info ] GC(5) Mark Start Mark End > Relocate Start Relocate End > High Low > [info ] GC(5) Capacity: 215040M (100%) 215040M (100%) 215040M > (100%) 215040M (100%) 215040M (100%) 215040M (100%) > [info ] GC(5) Reserve: 110M (0%) 110M (0%) > 110M (0%) 110M (0%) 110M (0%) > 32M (0%) > [info ] GC(5) Free: 45490M (21%) 39798M (19%) > 0M (0%) 140128M (65%) 140152M (65%) > 0M (0%) > [info ] GC(5) Used: 169440M (79%) 175132M (81%) > 214930M (100%) 74802M (35%) 215008M (100%) > 74778M (35%) > > GC(5)'s cause is Allocation Rate, as will be the cause of every cycle after > this. It hits [info ] GC(5) Relocation: Incomplete as well, bumps > MaxDurationOfGc to 107s and gets Allocation Stalls across all threads > during the Concurrent Relocate Phase. > > When it finished, the next debug log line reads: > [debug] Allocation Rate: 1300.000MB/s, Avg: 1178.000(+/-15.369)MB/s > [debug] Rule: Allocation Rate, MaxAllocRate: 3348.971MB/s, Free: 140102MB, > MaxDurationOfGC: 107.356s, TimeUntilGC: -65.622s > . > > I feel like these very long GC cycles at the beginning of the application's > run are just a consequence of the Metaspace GC cyles running first. > I don't really know how Metaspace quickly fills for other applications. I > solved this by setting the -XX:MetaspaceSize flag to a little over what we > use during a run, but I wanted to bring this to light, in case it is not a > wanted behavior. Thanks for the feedback. It indeed sounds like the early Metaspace GC cycles fool the heuristics here. Am I interpreting your last paragraph above correctly in that adjusting MetaspaceSize solves this part of the problem for you? If so, there are a few ways in which we could improve this on our side. For example, we might not want to do Metaspace GCs before warmup has completed, or only do Metaspace GCs when we fail to expand Metaspace. 
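As a concrete illustration of the workaround mentioned above, pre-sizing Metaspace on the command line could look like this (a sketch; the 512m value is a made-up placeholder that should be set a little above the Metaspace usage observed at steady state, and "MyApp" stands in for the real application):

```shell
# Raise the initial Metaspace high-water mark so the "Metaspace GC Threshold"
# cycles don't fire (and spoil the warmup heuristics) early in the run.
java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC \
     -Xms210g -Xmx210g \
     -XX:MetaspaceSize=512m \
     -Xlog:gc*:file=gc.log MyApp
```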
>
> During the run, on a particularly intensive workload, I can get lines such as:
>
> [debug] Allocation Rate: 15420.000MB/s, Avg: 3670.000(+/-1195.062)MB/s
> [debug] Rule: Allocation Rate, MaxAllocRate: 14208.385MB/s, Free: 81780MB,
> MaxDurationOfGC: 11.849s, TimeUntilGC: -6.193s
>
> right after a GC, with an Allocation Rate higher than the MaxAllocRate.
> This is problematic, because the average allocation rate is artificially
> lowered by the Allocation Stalls that happened during the preceding cycle.

The sampling window for the allocation rate is 1 second (with 10 samples). So it can take up to 1 second before a phase shift is clearly reflected in the average. This is to avoid making the GC too nervous in case a spike in allocation rate is not persistent.

>
> I am unsure how to configure the GC to get better results. I am currently
> trying to increase both the number of concurrent GC threads and the
> ZAllocationSpikeTolerance, to trigger GC cycles earlier and have them
> complete faster, at the expense of application speed.
>
> Is this a correct way to proceed?

It depends a bit on what the situation looks like when you get the allocation stalls. If the GC is running back-to-back and you still get allocation stalls, then your only option is to increase the heap size and/or the number of concurrent GC threads. However, if the system is not doing GCs back-to-back and you still get allocation stalls, then it sounds more like a heuristics issue.

If you're on JDK 13, a new option you could play with is -XX:SoftMaxHeapSize. Setting this to e.g. 75% of -Xmx will have the effect of the GC starting to collect garbage earlier, increasing the safety margin against allocation stalls and making it more resilient to the heuristics issue. This option can be particularly useful in situations where the allocation rate fluctuates a lot, which can sometimes fool the heuristics. Increasing the ZAllocationSpikeTolerance will also give the GC a larger safety margin.
However, this tolerance is relative to the current allocation rate, so it might not work that well if the allocation rate is fluctuating a lot.

cheers,
Per

>
> Thank you very much in advance for your time and answers.
> Best Regards,
>
> Pierre Mével
> pierre.mevel at activeviam.com
> ActiveViam
> 46 rue de l'arbre sec, 75001 Paris
>

From Barry.Galster at imc.com Fri Sep 27 15:52:00 2019
From: Barry.Galster at imc.com (Barry Galster)
Date: Fri, 27 Sep 2019 15:52:00 +0000
Subject: JVM crash with stack overflow on 11.0.4 using ZGC
Message-ID: <15F0534C-8521-4479-831F-1FAB91508F3E@imc.com>

Hello Per and ZGC experts,

We have encountered a periodic crash which shares many attributes with this previous posting: http://mail.openjdk.java.net/pipermail/zgc-dev/2018-May/000336.html

1. The stack is copied below along with the version information and relevant JVM flags.
2. The crash occurs during ~50% of the executions for the past week (we have not yet created a repeatable test harness).
3. We will attempt to reproduce the crash with JDK 13 and appreciate any other recommendations.

Regards,
Barry

--

ZGC/memory JVM flags:
-server -Xms64G -Xmx64G -Xmn3G -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -XX:+AlwaysPreTouch -XX:ConcGCThreads=12 -XX:+UseLargePages -XX:LargePageSizeInBytes=2m -XX:ZPath=/dev/hugepages -XX:+UseCodeCacheFlushing -Xlog:gc*=info,gc+age=trace,safepoint=debug:file=gc.log:time,level,tags

openjdk version "11.0.4" 2019-07-16
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.4+11)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.4+11, mixed mode)

Program terminated with signal 11, Segmentation fault.
#0 0x00007fc1557f9881 in _dl_update_slotinfo () from /lib64/ld-linux-x86-64.so.2 #1 0x00007fc1557e8078 in update_get_addr () from /lib64/ld-linux-x86-64.so.2 #2 0x00007fc1557fe928 in __tls_get_addr () from /lib64/ld-linux-x86-64.so.2 #3 0x00007fc153e08e40 in LoadBarrierNode::bottom_type() const () from /opt/openjdk-11.0.4/lib/server/libjvm.so #4 0x00007fc153b57abf in ProjNode::bottom_type() const () from /opt/openjdk-11.0.4/lib/server/libjvm.so #5 0x00007fc153e08eba in LoadBarrierNode::bottom_type() const () from /opt/openjdk-11.0.4/lib/server/libjvm.so #6 0x00007fc153b57abf in ProjNode::bottom_type() const () from /opt/openjdk-11.0.4/lib/server/libjvm.so ... ... these two lines repeat ~25k times until the bottom of the stack: #25290 0x00007fc153b57abf in ProjNode::bottom_type() const () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25291 0x00007fc153e08eba in LoadBarrierNode::bottom_type() const () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25292 0x00007fc153b57abf in ProjNode::bottom_type() const () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25293 0x00007fc153e08eba in LoadBarrierNode::bottom_type() const () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25294 0x00007fc153b57abf in ProjNode::bottom_type() const () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25295 0x00007fc153c03503 in PhaseIterGVN::register_new_node_with_optimizer(Node*, Node*) () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25296 0x00007fc153e082c6 in clone_load_barrier(PhaseIdealLoop*, LoadBarrierNode*, Node*, Node*, Node*) () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25297 0x00007fc153e1225d in ZBarrierSetC2::loop_optimize_gc_barrier(PhaseIdealLoop*, Node*, bool) () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25298 0x00007fc153a5964f in PhaseIdealLoop::split_if_with_blocks(VectorSet&, Node_Stack&, bool) () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25299 0x00007fc153a53216 in PhaseIdealLoop::build_and_optimize(bool, bool, bool) () from 
/opt/openjdk-11.0.4/lib/server/libjvm.so #25300 0x00007fc1535db4a3 in Compile::Optimize() () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25301 0x00007fc1535dc96a in Compile::Compile(ciEnv*, C2Compiler*, ciMethod*, int, bool, bool, bool, DirectiveSet*) () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25302 0x00007fc1534f26bc in C2Compiler::compile_method(ciEnv*, ciMethod*, int, DirectiveSet*) () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25303 0x00007fc1535e6a2d in CompileBroker::invoke_compiler_on_method(CompileTask*) () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25304 0x00007fc1535e81d8 in CompileBroker::compiler_thread_loop() () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25305 0x00007fc153d69423 in JavaThread::thread_main_inner() () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25306 0x00007fc153d696f5 in JavaThread::run() () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25307 0x00007fc153d653aa in Thread::call_run() () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25308 0x00007fc153badc9e in thread_native_entry(Thread*) () from /opt/openjdk-11.0.4/lib/server/libjvm.so #25309 0x00007fc155120dd5 in start_thread () from /lib64/libpthread.so.0 #25310 0x00007fc154a34ead in clone () from /lib64/libc.so.6 Barry Galster Performance Engineer - Team Lead T +13122047574 E Barry.Galster at imc.com 233 South Wacker Drive # 4250, Chicago, Illinois 60606, US imc.com ________________________________ The information in this e-mail is intended only for the person or entity to which it is addressed. It may contain confidential and /or privileged material, the disclosure of which is prohibited. Any unauthorized copying, disclosure or distribution of the information in this email outside your company is strictly forbidden.
If you are not the intended recipient (or have received this email in error), please contact the sender immediately and permanently delete all copies of this email and any attachments from your computer system and destroy any hard copies. Although the information in this email has been compiled with great care, neither IMC nor any of its related entities shall accept any responsibility for any errors, omissions or other inaccuracies in this information or for the consequences thereof, nor shall it be bound in any way by the contents of this e-mail or its attachments. Messages and attachments are scanned for all known viruses. Always scan attachments before opening them. From peter_booth at me.com Sat Sep 28 20:03:59 2019 From: peter_booth at me.com (Peter Booth) Date: Sat, 28 Sep 2019 16:03:59 -0400 Subject: JVM crash with stack overflow on 11.0.4 using ZGC In-Reply-To: <15F0534C-8521-4479-831F-1FAB91508F3E@imc.com> References: <15F0534C-8521-4479-831F-1FAB91508F3E@imc.com> Message-ID: I suggest running with the -XX:+PrintCompilation and -XX:+TraceDeoptimization flags. If the crash is always seen compiling the same method/class, then you could try flagging that method to not be compiled, as a short-term workaround. Sent from my iPhone > On Sep 27, 2019, at 11:52 AM, Barry Galster wrote: > > Hello Per and ZGC experts, > We have encountered a periodic crash which shares many attributes with this previous posting: http://mail.openjdk.java.net/pipermail/zgc-dev/2018-May/000336.html > > 1. The stack is copied below along with the version information and relevant jvm flags. > 2. The crash occurs during ~50% of the executions for the past week (we have not yet created a repeatable test harness). > 3. We will attempt to reproduce the crash with jdk13 and appreciate any other recommendations.
> > Regards, > Barry > [...]
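[Archive note: Peter's suggested workaround — excluding the suspect method from JIT compilation — can be sketched with standard HotSpot flags. The class and method names below are placeholders, not taken from this thread; substitute whatever -XX:+PrintCompilation shows C2 compiling just before the crash.]

```shell
# Sketch of the suggested short-term workaround (placeholder method name):
# 1. -XX:+PrintCompilation logs JIT activity, so the method being compiled
#    at crash time is visible on stdout.
# 2. -XX:CompileCommand=exclude then keeps that one method interpreted.
java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC \
     -XX:+PrintCompilation \
     -XX:CompileCommand=exclude,com.example.Foo::hotMethod \
     -jar app.jar
```

Excluding a hot method from C2 trades some peak throughput for stability, which is usually acceptable until the underlying compiler bug is fixed.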
From pme at activeviam.com Mon Sep 30 07:32:27 2019 From: pme at activeviam.com (Pierre Mevel) Date: Mon, 30 Sep 2019 09:32:27 +0200 Subject: Metaspace Threshold nullifying warmup cycles and configuration questions Message-ID: Thanks for the feedback. It indeed sounds like the early Metaspace GC > cycles fool the heuristics here. Am I interpreting your last paragraph > above correctly in that adjusting MetaspaceSize solves this part of the > problem for you? > If so, there are a few ways in which we could improve this on our side. > For example, we might not want to do Metaspace GCs before warmup has > completed, or only do Metaspace GCs when we fail to expand Metaspace. Indeed, I checked tonight's runs and they get the three warm up cycles as I expected. So the only prerequisite to fixing this issue was knowing how much metaspace we typically fill, and setting the initial size higher than this amount. The metaspace GCs were extremely quick, as they happened at the beginning of the application's run, not much to collect, so in my use case I think I would not do these cycles before warm up has been completed. The sampling window for the allocation rate is 1 second (with 10 > samples). So it can take up to 1 second before a phase shift is clearly > reflected in the average. This is to avoid making the GC too nervous in > case a spike in allocation rate is not persistent. > It depends a bit on what the situation looks like when you get the > allocation stalls. If the GC is running back-to-back and you still get > allocation stalls, then your only option is to increase the heap size > and/or concurrent GC threads. Alright, thanks for the info. For now we are having GC cycles running back to back. I'm still increasing concurrent threads to see how it goes, but the goal was not to increase heap size when compared with our current G1 configuration. What happens if ConcGCThreads is equal to the vCPUs count?
Does it run as if it was parallel, or do the threads still share the CPUs with the application threads? Or put in another way, if I set ConcGCThreads to 20 on a 20 vCPUs machine, will I get something similar to a stop the world? Or if I have 200 application threads, will the GC only get 10% of the CPU time? However, if the system is not doing GCs back-to-back and you still get > allocation stalls, then it sounds more like a heuristics issue. If > you're on JDK 13, a new option you could play with is > -XX:SoftMaxHeapSize. Setting this to e.g. 75% of -Xmx will have the > effect of GC starting to collect garbage earlier, increasing the safety > margin to allocation stalls and making it more resilient to the > heuristics issue. This option can be particularly useful in situations > where the allocation rate fluctuates a lot, which can sometimes fool the > heuristics. > I did read about this option and it seems like it would be a much better way not to be tricked by allocation rate fluctuations; unfortunately we are running with JDK 11 for now. I will try it on the side though. Thanks a lot for your quick answer. I shall keep you updated. Cheers, Pierre Mével pierre.mevel at activeviam.com ActiveViam - Stagiaire 46 rue de l'arbre sec, 75001 Paris From per.liden at oracle.com Mon Sep 30 12:25:38 2019 From: per.liden at oracle.com (Per Liden) Date: Mon, 30 Sep 2019 14:25:38 +0200 Subject: Metaspace Threshold nullifying warmup cycles and configuration questions In-Reply-To: References: Message-ID: <5286de3b-ef1c-5604-877f-7bdb633f1ab1@oracle.com> On 9/30/19 9:32 AM, Pierre Mevel wrote: > Thanks for the feedback. It indeed sounds like the early Metaspace GC >> cycles fool the heuristics here. Am I interpreting your last paragraph >> above correctly in that adjusting MetaspaceSize solves this part of the >> problem for you? >> If so, there are a few ways in which we could improve this on our side.
>> For example, we might not want to do Metaspace GCs before warmup has >> completed, or only do Metaspace GCs when we fail to expand Metaspace. > > > Indeed, I checked tonight's runs and they get the three warm up cycles as I > expected. So the only prerequisite to fixing this issue was knowing how > much metaspace we typically fill, and setting the initial size higher than this > amount. > The metaspace GCs were extremely quick, as they happened at the beginning > of the application's run, not much to collect, so in my use case I think I > would not do these cycles before warm up has been completed. Ok, thanks for confirming. > > The sampling window for the allocation rate is 1 second (with 10 >> samples). So it can take up to 1 second before a phase shift is clearly >> reflected in the average. This is to avoid making the GC too nervous in >> case a spike in allocation rate is not persistent. >> > It depends a bit on what the situation looks like when you get the >> allocation stalls. If the GC is running back-to-back and you still get >> allocation stalls, then your only option is to increase the heap size >> and/or concurrent GC threads. > > > Alright, thanks for the info. For now we are having GC cycles running back > to back. I'm still increasing concurrent threads to see how it goes, but > the goal was not to increase heap size when compared with our current G1 > configuration. What happens if ConcGCThreads is equal to the vCPUs count? > Does it run as if it was parallel, or do the threads still share the CPUs > with the application threads? The GC threads still share the CPUs with the application threads, and concurrent GC threads run with the same priority as application threads. > Or put in another way, if I set ConcGCThreads to 20 on a 20 vCPUs machine, > will I get something similar to a stop the world? It will not be similar to STW. The concurrent GC work will be interleaved with the application work at the OS thread scheduling level.
> Or if I have 200 > application threads, will the GC only get 10% of the CPU time? You're at the mercy of the OS scheduler. Assuming fair scheduling, and assuming all application threads want to run all the time (i.e. they never block for I/O, or locks, etc), all threads will each get their share of the CPU. Of course, when low latency is a priority, you typically want to have a system that is sized such that the "max application load" doesn't utilize more than ~70% of the CPU. That helps avoid OS scheduling latency artifacts, etc. > > However, if the system is not doing GCs back-to-back and you still get >> allocation stalls, then it sounds more like a heuristics issue. If >> you're on JDK 13, a new option you could play with is >> -XX:SoftMaxHeapSize. Setting this to e.g. 75% of -Xmx will have the >> effect of GC starting to collect garbage earlier, increasing the safety >> margin to allocation stalls and making it more resilient to the >> heuristics issue. This option can be particularly useful in situations >> where the allocation rate fluctuates a lot, which can sometimes fool the >> heuristics. >> > > I did read about this option and it seems like it would be a much better > way not to be tricked by allocation rate fluctuations; unfortunately we are > running with JDK 11 for now. I will try it on the side though. > > Thanks a lot for your quick answer. I shall keep you updated. Ok, thanks!
cheers, Per > > Cheers, > > Pierre Mével > pierre.mevel at activeviam.com > ActiveViam - Stagiaire > 46 rue de l'arbre sec, 75001 Paris > From per.liden at oracle.com Mon Sep 30 12:44:43 2019 From: per.liden at oracle.com (per.liden at oracle.com) Date: Mon, 30 Sep 2019 12:44:43 +0000 Subject: hg: zgc/zgc: 1045 new changesets Message-ID: <201909301246.x8UCkM07021456@aojmv0008.oracle.com> Changeset: e53ec3b362f4 Author: ngasson Date: 2019-06-17 15:31 +0800 URL: https://hg.openjdk.java.net/zgc/zgc/rev/e53ec3b362f4 8224851: AArch64: fix warnings and errors with Clang and GCC 8.3 Reviewed-by: aph, kbarrett ! src/hotspot/cpu/aarch64/aarch64.ad ! src/hotspot/cpu/aarch64/assembler_aarch64.hpp ! src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp ! src/hotspot/cpu/aarch64/c1_LIRGenerator_aarch64.cpp ! src/hotspot/cpu/aarch64/frame_aarch64.cpp ! src/hotspot/cpu/aarch64/interp_masm_aarch64.hpp ! src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp ! src/hotspot/cpu/aarch64/macroAssembler_aarch64_log.cpp ! src/hotspot/cpu/aarch64/macroAssembler_aarch64_trig.cpp ! src/hotspot/cpu/aarch64/vm_version_aarch64.cpp ! src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp ! src/hotspot/os_cpu/linux_aarch64/copy_linux_aarch64.s ! src/hotspot/os_cpu/linux_aarch64/os_linux_aarch64.cpp Changeset: 46049b8a5658 Author: dfuchs Date: 2019-06-17 20:03 +0100 URL: https://hg.openjdk.java.net/zgc/zgc/rev/46049b8a5658 8225578: Replace wildcard address with loopback or local host in tests - part 16 Summary: Fixes java/net/Authenticator and java/net/CookieHandler to stop depending on the wildcard address, wherever possible. Reviewed-by: chegar ! test/jdk/java/net/Authenticator/AuthNPETest.java ! test/jdk/java/net/Authenticator/B4678055.java ! test/jdk/java/net/Authenticator/B4759514.java ! test/jdk/java/net/Authenticator/B4769350.java ! test/jdk/java/net/Authenticator/B4921848.java ! test/jdk/java/net/Authenticator/B4933582.java ! test/jdk/java/net/Authenticator/B4962064.java !
test/jdk/java/net/Authenticator/B6870935.java ! test/jdk/java/net/Authenticator/B8034170.java ! test/jdk/java/net/Authenticator/BasicTest.java ! test/jdk/java/net/Authenticator/BasicTest3.java ! test/jdk/java/net/Authenticator/BasicTest4.java ! test/jdk/java/net/Authenticator/Deadlock.java ! test/jdk/java/net/CookieHandler/CookieHandlerTest.java ! test/jdk/java/net/CookieHandler/CookieManagerTest.java ! test/jdk/java/net/CookieHandler/EmptyCookieHeader.java ! test/jdk/java/net/CookieHandler/LocalHostCookie.java Changeset: da554fdb51d0 Author: ysuenaga Date: 2019-06-18 10:54 +0900 URL: https://hg.openjdk.java.net/zgc/zgc/rev/da554fdb51d0 8225636: SA can't handle prelinked libraries Reviewed-by: sspitsyn, cjplummer ! src/jdk.hotspot.agent/linux/native/libsaproc/ps_core.c Changeset: 32cce302a1fd Author: rehn Date: 2019-06-18 11:06 +0200 URL: https://hg.openjdk.java.net/zgc/zgc/rev/32cce302a1fd 8226227: Missing include of thread.inline.hpp Reviewed-by: coleenp ! src/hotspot/share/gc/shared/gcLocker.inline.hpp ! src/hotspot/share/runtime/vframe.inline.hpp Changeset: b78af6d8a252 Author: chegar Date: 2019-06-18 14:52 +0100 URL: https://hg.openjdk.java.net/zgc/zgc/rev/b78af6d8a252 8225583: Examine the HttpResponse.BodySubscribers for null handling Reviewed-by: dfuchs, prappo ! src/java.net.http/share/classes/java/net/http/HttpResponse.java ! src/java.net.http/share/classes/jdk/internal/net/http/LineSubscriberAdapter.java ! src/java.net.http/share/classes/jdk/internal/net/http/ResponseSubscribers.java Changeset: 8d50ff464ae5 Author: rriggs Date: 2019-06-18 10:37 -0400 URL: https://hg.openjdk.java.net/zgc/zgc/rev/8d50ff464ae5 8226242: Diagnostic output for posix_spawn failure Reviewed-by: bpb, stuefe, dholmes, martin ! src/java.base/unix/native/libjava/ProcessImpl_md.c Changeset: 8259c22be42c Author: zgu Date: 2019-06-18 13:11 -0400 URL: https://hg.openjdk.java.net/zgc/zgc/rev/8259c22be42c 8225804: SA: Remove unused CollectedHeap.oopOffset() method Reviewed-by: rkennke ! 
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/gc/shared/CollectedHeap.java ! src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/oops/ObjectHeap.java Changeset: 17f24c0e2f01 Author: chegar Date: 2019-06-18 18:38 +0100 URL: https://hg.openjdk.java.net/zgc/zgc/rev/17f24c0e2f01 8226319: Add forgotten test/jdk/java/net/httpclient/BodySubscribersTest.java Reviewed-by: dfuchs, prappo + test/jdk/java/net/httpclient/BodySubscribersTest.java Changeset: d69faba543ec Author: iignatyev Date: 2019-06-13 13:42 -0700 URL: https://hg.openjdk.java.net/zgc/zgc/rev/d69faba543ec 8225746: hotspot problem lists list unexciting tests Reviewed-by: kvn ! test/hotspot/jtreg/ProblemList-graal.txt ! test/hotspot/jtreg/ProblemList.txt ! test/jdk/ProblemList-graal.txt Changeset: bba34c350225 Author: mullan Date: 2019-06-13 17:49 -0400 URL: https://hg.openjdk.java.net/zgc/zgc/rev/bba34c350225 8225730: Add sun/security/pkcs11/tls/tls12/FipsModeTLS12.java to ProblemList for linux Reviewed-by: xuelei ! test/jdk/ProblemList.txt Changeset: 24872d367cb6 Author: kvn Date: 2019-06-13 17:18 -0700 URL: https://hg.openjdk.java.net/zgc/zgc/rev/24872d367cb6 8209590: compiler/compilercontrol/DontInlineCommandTest.java test fails with "Inline message differs" error Summary: increase InlineSmallCode to 4000 for tests which check inlining decisions. Reviewed-by: iignatyev ! test/hotspot/jtreg/compiler/compilercontrol/share/scenario/Command.java Changeset: c53db49c7a2f Author: jwilhelm Date: 2019-06-14 03:50 +0200 URL: https://hg.openjdk.java.net/zgc/zgc/rev/c53db49c7a2f Added tag jdk-13+25 for changeset 22b3b7983ada ! .hgtags Changeset: c3b354fdbaa4 Author: shade Date: 2019-06-14 10:02 +0200 URL: https://hg.openjdk.java.net/zgc/zgc/rev/c3b354fdbaa4 8225695: 32-bit build failures after JDK-8080462 (Update SunPKCS11 provider with PKCS11 v2.40 support) Reviewed-by: alanb ! src/jdk.crypto.cryptoki/share/native/libj2pkcs11/p11_general.c ! 
src/jdk.crypto.cryptoki/share/native/libj2pkcs11/p11_sign.c Changeset: 328d4a455e4b Author: xuelei Date: 2019-06-14 12:19 -0700 URL: https://hg.openjdk.java.net/zgc/zgc/rev/328d4a455e4b 8224829: AsyncSSLSocketClose.java has timing issue Reviewed-by: jnimeh, dfuchs ! src/java.base/share/classes/sun/security/ssl/SSLSocketImpl.java ! test/jdk/javax/net/ssl/SSLSocket/Tls13PacketSize.java + test/jdk/sun/security/ssl/SSLSocketImpl/BlockedAsyncClose.java Changeset: 55a79ffab804 Author: weijun Date: 2019-06-15 14:39 +0800 URL: https://hg.openjdk.java.net/zgc/zgc/rev/55a79ffab804 8225392: Comparison builds are failing due to cacerts file Reviewed-by: erikj, martin, mullan ! make/jdk/src/classes/build/tools/generatecacerts/GenerateCacerts.java ! src/java.base/share/classes/sun/security/tools/keytool/Main.java ! test/jdk/sun/security/lib/cacerts/VerifyCACerts.java + test/jdk/sun/security/tools/keytool/ListOrder.java Changeset: 22ce9e266a4b Author: zgu Date: 2019-06-14 12:08 -0400 URL: https://hg.openjdk.java.net/zgc/zgc/rev/22ce9e266a4b 8225801: Shenandoah: Adjust SA to reflect recent forwarding pointer changes Reviewed-by: shade ! src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/gc/shenandoah/ShenandoahHeap.java ! src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/gc/shenandoah/ShenandoahHeapRegion.java Changeset: 666f51a72171 Author: kvn Date: 2019-06-17 09:11 -0700 URL: https://hg.openjdk.java.net/zgc/zgc/rev/666f51a72171 8181837: [Graal] compiler/jvmci/SecurityRestrictionsTest.java fails with AccessControlException Summary: Remove test from Problem list because it does not fail anymore. Reviewed-by: iignatyev ! test/hotspot/jtreg/ProblemList-graal.txt Changeset: 2d62c1192d41 Author: dtitov Date: 2019-06-13 11:21 -0700 URL: https://hg.openjdk.java.net/zgc/zgc/rev/2d62c1192d41 8225543: Jcmd fails to attach to the Java process on Linux using the main class name if whitespace options were used to launch the process Reviewed-by: sspitsyn, dholmes ! 
src/jdk.jcmd/linux/classes/sun/tools/ProcessHelper.java ! test/jdk/sun/tools/jcmd/TestProcessHelper.java Changeset: 09ee0bd26bda Author: dtitov Date: 2019-06-17 14:31 -0700 URL: https://hg.openjdk.java.net/zgc/zgc/rev/09ee0bd26bda 8217348: assert(thread->is_Java_thread()) failed: just checking Reviewed-by: sspitsyn, dholmes, amenkov, jcbeyler ! src/hotspot/share/prims/jvmtiEnvBase.cpp Changeset: 6c2d53701e34 Author: rfield Date: 2019-06-17 17:14 -0700 URL: https://hg.openjdk.java.net/zgc/zgc/rev/6c2d53701e34 8200701: jdk/jshell/ExceptionsTest.java fails on Windows, after JDK-8198801 8159740: JShell: corralled declarations do not have correct source to wrapper mapping 8212167: JShell : Stack trace of exception has wrong line number Summary: Build corralled (recoverable undeclared definitions) declarations from position translating wraps.... Reviewed-by: jlahoda ! src/jdk.jshell/share/classes/jdk/jshell/Corraller.java ! src/jdk.jshell/share/classes/jdk/jshell/Eval.java ! src/jdk.jshell/share/classes/jdk/jshell/GeneralWrap.java ! src/jdk.jshell/share/classes/jdk/jshell/SourceCodeAnalysisImpl.java ! src/jdk.jshell/share/classes/jdk/jshell/Wrap.java ! test/langtools/ProblemList.txt ! test/langtools/jdk/jshell/ClassesTest.java ! test/langtools/jdk/jshell/ExceptionsTest.java ! test/langtools/jdk/jshell/KullaTesting.java ! test/langtools/jdk/jshell/WrapperTest.java Changeset: 922a4a554807 Author: rraghavan Date: 2019-06-18 10:00 +0530 URL: https://hg.openjdk.java.net/zgc/zgc/rev/922a4a554807 8226198: use of & instead of && in LibraryCallKit::arraycopy_restore_alloc_state Summary: Used logical operator correctly Reviewed-by: kvn, thartmann ! src/hotspot/share/opto/library_call.cpp Changeset: 3e08fa647eea Author: gziemski Date: 2019-06-18 12:39 -0500 URL: https://hg.openjdk.java.net/zgc/zgc/rev/3e08fa647eea 8225310: JFR crashed in JfrPeriodicEventSet::requestProtectionDomainCacheTableStatistics() Summary: Added lock around table usage Reviewed-by: coleenp, hseigel ! 
src/hotspot/share/classfile/systemDictionary.cpp Changeset: bc5a0508253c Author: jjg Date: 2019-06-18 11:52 -0700 URL: https://hg.openjdk.java.net/zgc/zgc/rev/bc5a0508253c 8225748: Use SHA-256 for javap classfile checksum Reviewed-by: mchung ! src/jdk.compiler/share/classes/com/sun/tools/javac/main/Main.java ! src/jdk.jdeps/share/classes/com/sun/tools/javap/JavapTask.java ! src/jdk.jdeps/share/classes/com/sun/tools/javap/resources/javap.properties ! test/langtools/tools/javac/T6942649.java ! test/langtools/tools/javap/T4884240.java Changeset: 688a2a361e14 Author: jwilhelm Date: 2019-06-18 22:48 +0200 URL: https://hg.openjdk.java.net/zgc/zgc/rev/688a2a361e14 Merge ! .hgtags Changeset: c439c469e803 Author: lancea Date: 2019-06-18 17:50 -0400 URL: https://hg.openjdk.java.net/zgc/zgc/rev/c439c469e803 8225680: Address links in java.sql.rowset Reviewed-by: jjg, bpb ! src/java.sql.rowset/share/classes/com/sun/rowset/providers/RIXMLProvider.java ! src/java.sql.rowset/share/classes/javax/sql/rowset/WebRowSet.java ! src/java.sql.rowset/share/classes/javax/sql/rowset/package-info.java Changeset: 970adfac768d Author: zgu Date: 2019-06-18 17:58 -0400 URL: https://hg.openjdk.java.net/zgc/zgc/rev/970adfac768d 8225573: Shenandoah: Enhance ShenandoahVerifier to ensure roots to-space invariant Reviewed-by: shade ! src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp ! src/hotspot/share/gc/shenandoah/shenandoahVerifier.cpp ! src/hotspot/share/gc/shenandoah/shenandoahVerifier.hpp Changeset: aa800530fb49 Author: pli Date: 2019-06-17 09:40 +0000 URL: https://hg.openjdk.java.net/zgc/zgc/rev/aa800530fb49 8226222: [JVMCI] Export AArch64 field VM_Version::_psr_info.dczid_el0 Reviewed-by: kvn ! src/hotspot/share/jvmci/jvmciCompilerToVMInit.cpp ! 
src/hotspot/share/jvmci/vmStructs_jvmci.cpp Changeset: 7cf925f385fe Author: pliden Date: 2019-06-19 08:43 +0200 URL: https://hg.openjdk.java.net/zgc/zgc/rev/7cf925f385fe 8225779: Remove unused CollectedHeap::cell_header_size() Reviewed-by: eosterlund, rkennke, shade ! src/hotspot/share/asm/assembler.cpp ! src/hotspot/share/gc/shared/collectedHeap.hpp Changeset: 4efe251009b4 Author: prappo Date: 2019-06-18 14:12 +0100 URL: https://hg.openjdk.java.net/zgc/zgc/rev/4efe251009b4 8226303: Examine the HttpRequest.BodyPublishers for exception handling Reviewed-by: chegar ! src/java.net.http/share/classes/jdk/internal/net/http/PullPublisher.java ! src/java.net.http/share/classes/jdk/internal/net/http/RequestPublishers.java + test/jdk/java/net/httpclient/RelayingPublishers.java Changeset: e0be41293b41 Author: prappo Date: 2019-06-19 12:17 +0100 URL: https://hg.openjdk.java.net/zgc/zgc/rev/e0be41293b41 Merge - src/jdk.crypto.cryptoki/share/native/libj2pkcs11/pkcs-11v2-20a3.h Changeset: e9da3a44a7ed Author: zgu Date: 2019-06-19 08:52 -0400 URL: https://hg.openjdk.java.net/zgc/zgc/rev/e9da3a44a7ed 8225582: Shenandoah: Enable concurrent evacuation of JNIHandles Reviewed-by: rkennke, shade + src/hotspot/share/gc/shenandoah/shenandoahConcurrentRoots.cpp + src/hotspot/share/gc/shenandoah/shenandoahConcurrentRoots.hpp ! src/hotspot/share/gc/shenandoah/shenandoahControlThread.cpp ! src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp ! src/hotspot/share/gc/shenandoah/shenandoahHeap.hpp ! src/hotspot/share/gc/shenandoah/shenandoahPhaseTimings.hpp ! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.cpp ! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.hpp ! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.inline.hpp ! src/hotspot/share/gc/shenandoah/shenandoahWorkerPolicy.cpp ! 
src/hotspot/share/gc/shenandoah/shenandoahWorkerPolicy.hpp

Changeset: d7da94e6c169
Author: aph
Date: 2019-06-18 16:15 +0100
URL: https://hg.openjdk.java.net/zgc/zgc/rev/d7da94e6c169

8225716: G1 GC: Undefined behaviour in G1BlockOffsetTablePart::block_at_or_preceding
Reviewed-by: kbarrett, tschatzl

! src/hotspot/share/gc/g1/g1BlockOffsetTable.cpp
! src/hotspot/share/gc/g1/g1BlockOffsetTable.hpp
! src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp

Changeset: 82deab2dd59e
Author: hseigel
Date: 2019-06-19 13:34 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/82deab2dd59e

8226304: Obsolete the -XX:+FailOverToOldVerifier option
Summary: Change the option from deprecated to obsolete
Reviewed-by: lfoltan, coleenp

! src/hotspot/share/classfile/verifier.cpp
! src/hotspot/share/runtime/arguments.cpp
! src/hotspot/share/runtime/globals.hpp

Changeset: 43627549a488
Author: shurailine
Date: 2019-06-19 05:04 -0800
URL: https://hg.openjdk.java.net/zgc/zgc/rev/43627549a488

8226359: Switch to JCov build which supports byte code version 58
Reviewed-by: erikj

! make/conf/jib-profiles.js

Changeset: afc6c25c2f4a
Author: iignatyev
Date: 2019-06-18 09:19 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/afc6c25c2f4a

8226313: problem list compiler/types/correctness tests
Reviewed-by: thartmann

! test/hotspot/jtreg/ProblemList.txt

Changeset: be453f7ee72c
Author: amenkov
Date: 2019-06-18 16:08 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/be453f7ee72c

8225682: Reference to JNI spec on java.sun.com
Reviewed-by: gadams, cjplummer, sspitsyn

! make/data/jdwp/jdwp.spec

Changeset: af38014cb097
Author: iignatyev
Date: 2019-06-19 03:21 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/af38014cb097

8226360: merge entries in hotspot problem lists
Reviewed-by: epavlova, kvn

! test/hotspot/jtreg/ProblemList-graal.txt
! test/jdk/ProblemList-graal.txt

Changeset: 3dcfa209f769
Author: thartmann
Date: 2019-06-19 12:24 +0200
URL: https://hg.openjdk.java.net/zgc/zgc/rev/3dcfa209f769

8226381: ProblemList java/lang/reflect/PublicMethods/PublicMethodsTest.java
Summary: Put test on AOT ProblemList.
Reviewed-by: iignatyev

! test/jdk/ProblemList-aot.txt

Changeset: 80594c78a608
Author: thartmann
Date: 2019-06-19 12:25 +0200
URL: https://hg.openjdk.java.net/zgc/zgc/rev/80594c78a608

8226382: ProblemList java/lang/constant/MethodTypeDescTest.java
Summary: Put test on AOT ProblemList.
Reviewed-by: iignatyev

! test/jdk/ProblemList-aot.txt

Changeset: 360f8769d3dc
Author: hseigel
Date: 2019-06-19 08:42 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/360f8769d3dc

8225789: Empty method parameter type should generate ClassFormatError
Summary: Check for an empty name when verifying unqualified names
Reviewed-by: lfoltan, coleenp

! src/hotspot/share/classfile/classFileParser.cpp
+ test/hotspot/jtreg/runtime/classFileParserBug/EmptyUnqName.jasm
+ test/hotspot/jtreg/runtime/classFileParserBug/TestEmptyUnqName.java
! test/hotspot/jtreg/runtime/verifier/TestSigParse.java

Changeset: 0692b67f5462
Author: aph
Date: 2019-06-18 16:15 +0100
URL: https://hg.openjdk.java.net/zgc/zgc/rev/0692b67f5462

8225716: G1 GC: Undefined behaviour in G1BlockOffsetTablePart::block_at_or_preceding
Reviewed-by: kbarrett, tschatzl

! src/hotspot/share/gc/g1/g1BlockOffsetTable.cpp
! src/hotspot/share/gc/g1/g1BlockOffsetTable.hpp
! src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp

Changeset: 726cb89a9997
Author: jwilhelm
Date: 2019-06-20 02:10 +0200
URL: https://hg.openjdk.java.net/zgc/zgc/rev/726cb89a9997

Merge

! src/hotspot/share/classfile/classFileParser.cpp

Changeset: 48a14297c030
Author: jwilhelm
Date: 2019-06-20 04:15 +0200
URL: https://hg.openjdk.java.net/zgc/zgc/rev/48a14297c030

Added tag jdk-14+2 for changeset 43627549a488

! .hgtags

Changeset: eaf0a8de3450
Author: tvaleev
Date: 2019-06-20 03:32 +0000
URL: https://hg.openjdk.java.net/zgc/zgc/rev/eaf0a8de3450

8226286: Remove unused method java.lang.Integer::formatUnsignedInt and cleanup Integer/Long classes
Reviewed-by: bpb, redestad

! src/java.base/share/classes/java/lang/Integer.java
! src/java.base/share/classes/java/lang/Long.java

Changeset: 12e8433e2581
Author: coffeys
Date: 2019-06-20 08:02 +0000
URL: https://hg.openjdk.java.net/zgc/zgc/rev/12e8433e2581

8213561: ZipFile/MultiThreadedReadTest.java timed out in tier1
Reviewed-by: lancea

! test/jdk/java/util/zip/ZipFile/MultiThreadedReadTest.java

Changeset: 99b604ec1af6
Author: gadams
Date: 2019-06-20 07:13 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/99b604ec1af6

8224642: Test sun/tools/jcmd/TestJcmdSanity.java fails: Bad file descriptor
Reviewed-by: cjplummer, rschmelter, clanger

! src/jdk.attach/linux/classes/sun/tools/attach/VirtualMachineImpl.java

Changeset: 6a7d6b6bbd78
Author: zgu
Date: 2019-06-20 10:12 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/6a7d6b6bbd78

8226413: Shenandoah: Separate root scanner for SH::object_iterate()
Reviewed-by: rkennke

! src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp
! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.cpp
! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.hpp
! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.inline.hpp

Changeset: a7b9d6d4940e
Author: erikj
Date: 2019-06-20 09:35 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/a7b9d6d4940e

8226521: Detect WSL2 as WSL in configure
Reviewed-by: erikj
Contributed-by: andrewluotechnologies at outlook.com

! make/autoconf/build-aux/config.guess

Changeset: 1aae575eb1ef
Author: naoto
Date: 2019-06-20 11:21 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/1aae575eb1ef

8220229: Timezone pattern "OOOO" does not result in the full "GMT+00:00" substring
Reviewed-by: lancea, rriggs

! src/java.base/share/classes/java/time/format/DateTimeFormatter.java

Changeset: 95794e32352e
Author: dlsmith
Date: 2019-06-20 14:03 -0600
URL: https://hg.openjdk.java.net/zgc/zgc/rev/95794e32352e

8226325: Support building of filtered spec bundles
Reviewed-by: erikj

! doc/building.md
! make/Docs.gmk
! make/InitSupport.gmk
! make/Main.gmk
! make/data/docs-resources/resources/jdk-default.css

Changeset: 0f141453b9e0
Author: lancea
Date: 2019-06-20 16:15 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/0f141453b9e0

8226518: Typo in the ConnectionBuilder javadoc examples
Reviewed-by: bpb

! src/java.sql/share/classes/java/sql/ConnectionBuilder.java
! src/java.sql/share/classes/javax/sql/PooledConnectionBuilder.java
! src/java.sql/share/classes/javax/sql/XAConnectionBuilder.java

Changeset: d3030613fab9
Author: robm
Date: 2019-06-20 20:20 +0000
URL: https://hg.openjdk.java.net/zgc/zgc/rev/d3030613fab9

8223727: com/sun/jndi/ldap/privconn/RunTest.java failed due to hang in LdapRequest.getReplyBer
Reviewed-by: prappo

! src/java.naming/share/classes/com/sun/jndi/ldap/Connection.java

Changeset: 79a7fc6c9bc7
Author: zgu
Date: 2019-06-20 18:29 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/79a7fc6c9bc7

8225590: Shenandoah: Refactor ShenandoahClassLoaderDataRoots API
Reviewed-by: rkennke

! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.cpp
! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.hpp
! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.inline.hpp

Changeset: e27ae3706392
Author: jwilhelm
Date: 2019-06-20 04:08 +0200
URL: https://hg.openjdk.java.net/zgc/zgc/rev/e27ae3706392

Added tag jdk-13+26 for changeset 0692b67f5462

! .hgtags

Changeset: 1170b6d92d1c
Author: xuelei
Date: 2019-06-19 21:49 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/1170b6d92d1c

8225766: Curve in certificate should not affect signature scheme when using TLSv1.3
Reviewed-by: ascarpino

! src/java.base/share/classes/sun/security/ssl/SignatureScheme.java
! src/java.base/share/classes/sun/security/ssl/X509Authentication.java
+ test/jdk/sun/security/ssl/SignatureScheme/Tls13NamedGroups.java

Changeset: 65916ade7fa2
Author: erikj
Date: 2019-06-20 07:56 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/65916ade7fa2

8226404: bootcycle build uses wrong CDS archive
Reviewed-by: iklam

! make/autoconf/bootcycle-spec.gmk.in

Changeset: 8892555795cd
Author: kvn
Date: 2019-06-20 10:32 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/8892555795cd

8223794: applications/kitchensink/Kitchensink.java crash bad oop with Graal
Summary: added new nmethod::oop_at_phantom() method for JVMCI to notify GC that oop should be kept alive
Reviewed-by: dlong, eosterlund

! src/hotspot/share/code/nmethod.cpp
! src/hotspot/share/code/nmethod.hpp
! src/hotspot/share/jvmci/jvmciCompilerToVM.cpp
! src/hotspot/share/jvmci/jvmciRuntime.cpp
! src/hotspot/share/jvmci/jvmciRuntime.hpp

Changeset: 76647c08ce0c
Author: epavlova
Date: 2019-06-20 11:42 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/76647c08ce0c

8225684: [AOT] vmTestbase/vm/oom/production/AlwaysOOMProduction tests fail with AOTed java.base
Reviewed-by: kvn

+ test/hotspot/jtreg/ProblemList-aot.txt

Changeset: de3484367466
Author: jjg
Date: 2019-06-20 14:24 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/de3484367466

8226412: Fix command-line help text for javac -target
Reviewed-by: vromero

! src/jdk.compiler/share/classes/com/sun/tools/javac/resources/javac.properties

Changeset: ced62a6a7bbe
Author: dtitov
Date: 2019-06-20 18:47 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/ced62a6a7bbe

8220175: serviceability/dcmd/framework/VMVersionTest.java fails with a timeout
Reviewed-by: sspitsyn, cjplummer

! src/hotspot/os/linux/perfMemory_linux.cpp

Changeset: 81ac9262e63b
Author: jwilhelm
Date: 2019-06-21 04:16 +0200
URL: https://hg.openjdk.java.net/zgc/zgc/rev/81ac9262e63b

Merge

! .hgtags

Changeset: 00f29fe98900
Author: coffeys
Date: 2019-06-21 08:07 +0000
URL: https://hg.openjdk.java.net/zgc/zgc/rev/00f29fe98900

8133489: Better messaging for PKIX path validation matching
Reviewed-by: xuelei

! src/java.base/share/classes/java/security/cert/X509CertSelector.java
! test/jdk/java/security/cert/CertPathBuilder/selfIssued/KeyUsageMatters.java

Changeset: 17ba7ce18564
Author: hannesw
Date: 2019-06-21 12:23 +0200
URL: https://hg.openjdk.java.net/zgc/zgc/rev/17ba7ce18564

8225802: Remove unused CSS classes from HTML doclet
Reviewed-by: jjg

! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/markup/HtmlStyle.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/resources/stylesheet.css

Changeset: e764228f71dc
Author: mullan
Date: 2019-06-21 08:38 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/e764228f71dc

8226307: Curve names should be case-insensitive
Reviewed-by: igerasim, jnimeh, wetmore

! src/java.base/share/classes/sun/security/util/CurveDB.java
! test/jdk/java/security/KeyAgreement/KeyAgreementTest.java

Changeset: 6dfdcd31463d
Author: kvn
Date: 2019-06-21 13:04 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/6dfdcd31463d

8185139: [Graal] Tests which set too restrictive security manager fail with Graal
Summary: tests should also check default policy
Reviewed-by: mchung, dfuchs, alanb, mullan

! test/jdk/ProblemList-graal.txt
! test/jdk/java/lang/Class/getDeclaredField/ClassDeclaredFieldsTest.java
! test/jdk/java/lang/Class/getDeclaredField/FieldSetAccessibleTest.java
! test/jdk/java/lang/ProcessBuilder/Basic.java
! test/jdk/java/lang/ProcessBuilder/SecurityManagerClinit.java
! test/jdk/java/lang/ProcessHandle/PermissionTest.java
! test/jdk/java/lang/System/Logger/custom/CustomLoggerTest.java
! test/jdk/java/lang/System/Logger/default/DefaultLoggerTest.java
! test/jdk/java/lang/System/LoggerFinder/BaseLoggerFinderTest/BaseLoggerFinderTest.java
! test/jdk/java/lang/System/LoggerFinder/DefaultLoggerFinderTest/DefaultLoggerFinderTest.java
! test/jdk/java/lang/System/LoggerFinder/internal/BaseLoggerBridgeTest/BaseLoggerBridgeTest.java
! test/jdk/java/lang/System/LoggerFinder/internal/BasePlatformLoggerTest/BasePlatformLoggerTest.java
! test/jdk/java/lang/System/LoggerFinder/internal/LoggerBridgeTest/LoggerBridgeTest.java
! test/jdk/java/lang/System/LoggerFinder/internal/PlatformLoggerBridgeTest/PlatformLoggerBridgeTest.java
! test/jdk/java/lang/System/LoggerFinder/jdk/DefaultLoggerBridgeTest/DefaultLoggerBridgeTest.java
! test/jdk/java/lang/System/LoggerFinder/jdk/DefaultPlatformLoggerTest/DefaultPlatformLoggerTest.java
! test/jdk/java/lang/invoke/InvokeDynamicPrintArgs.java
! test/jdk/java/lang/invoke/MethodHandleConstants.java
! test/jdk/java/security/Policy/Dynamic/DynamicPolicy.java
! test/jdk/java/util/concurrent/Executors/PrivilegedCallables.java
! test/jdk/java/util/logging/FileHandlerLongLimit.java
! test/jdk/java/util/logging/FileHandlerPath.java
! test/jdk/java/util/logging/FileHandlerPatternExceptions.java
! test/jdk/java/util/logging/LogManager/Configuration/ParentLoggerWithHandlerGC.java
! test/jdk/java/util/logging/LogManager/Configuration/updateConfiguration/HandlersOnComplexResetUpdate.java
! test/jdk/java/util/logging/LogManager/Configuration/updateConfiguration/HandlersOnComplexUpdate.java
! test/jdk/java/util/logging/LogManager/Configuration/updateConfiguration/SimpleUpdateConfigurationTest.java
! test/jdk/java/util/logging/LogManager/RootLogger/setLevel/TestRootLoggerLevel.java
! test/jdk/java/util/logging/LogManagerAppContextDeadlock.java
! test/jdk/java/util/logging/RootLogger/RootLevelInConfigFile.java
! test/jdk/java/util/logging/TestAppletLoggerContext.java
! test/jdk/java/util/logging/TestConfigurationListeners.java

Changeset: 31bf7b93df5d
Author: kvn
Date: 2019-06-21 16:21 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/31bf7b93df5d

8225810: Update JVMCI
Reviewed-by: never, dnsimon

! src/hotspot/share/jvmci/jvmciCompilerToVM.cpp
! src/hotspot/share/jvmci/jvmciEnv.cpp
! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/CompilerToVM.java
! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotCompilationRequest.java
! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotJVMCIRuntime.java
+ src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotObjectConstantScope.java
! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaType.java
! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedObjectTypeImpl.java
! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedPrimitiveType.java
! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/IndirectHotSpotObjectConstantImpl.java
! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/SharedLibraryJVMCIReflection.java
! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.meta/src/jdk/vm/ci/meta/MetaUtil.java
! test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestMetaAccessProvider.java
! test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaType.java
! test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TypeUniverse.java

Changeset: a3e3f3caf284
Author: sspitsyn
Date: 2019-06-20 23:12 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/a3e3f3caf284

8223736: jvmti/scenarios/contention/TC04/tc04t001/TestDescription.java fails due to wrong number of MonitorContendedEntered events
Summary: Fix the synchronization issue in the test
Reviewed-by: cjplummer, dcubed, amenkov

! test/hotspot/jtreg/vmTestbase/nsk/jvmti/scenarios/contention/TC04/tc04t001.java
! test/hotspot/jtreg/vmTestbase/nsk/jvmti/scenarios/contention/TC04/tc04t001/tc04t001.cpp

Changeset: 68ef70c9a921
Author: erikj
Date: 2019-06-21 06:33 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/68ef70c9a921

8226538: find-files.gmk gets corrupted if tab completion is used before running make first
Reviewed-by: tbell

! make/common/FindTests.gmk
! test/make/TestMake.gmk

Changeset: a6411f1e63f3
Author: adinn
Date: 2019-06-21 15:16 +0100
URL: https://hg.openjdk.java.net/zgc/zgc/rev/a6411f1e63f3

8226203: MappedByteBuffer.force method may have no effect on implementation specific map modes
Summary: Fix comment for MappedByteBuffer force methods
Reviewed-by: alanb

! src/java.base/share/classes/java/nio/MappedByteBuffer.java

Changeset: e9d4e0a9c8c7
Author: coleenp
Date: 2019-06-21 09:53 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/e9d4e0a9c8c7

8226394: [TESTBUG] vmTestbase/metaspace/flags/maxMetaspaceSize/TestDescription.java fails with java.lang.NoClassDefFoundError
Summary: don't use printStackTrace to verify OOM type.
Reviewed-by: lfoltan, dholmes

! test/hotspot/jtreg/vmTestbase/nsk/share/gc/gp/GarbageUtils.java

Changeset: 076f34b82b98
Author: weijun
Date: 2019-06-21 23:44 +0800
URL: https://hg.openjdk.java.net/zgc/zgc/rev/076f34b82b98

8225257: sun/security/tools/keytool/PSS.java timed out
Reviewed-by: valeriep

- test/jdk/sun/security/tools/keytool/PSS.java
+ test/jdk/sun/security/tools/keytool/pss/PSS.java
+ test/jdk/sun/security/tools/keytool/pss/java.base/sun/security/rsa/RSAKeyPairGenerator.java

Changeset: e00591da418d
Author: erikj
Date: 2019-06-21 10:38 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/e00591da418d

8226269: Race in SetupProcessMarkdown
Reviewed-by: tbell

! make/common/ProcessMarkdown.gmk

Changeset: 97c75e545302
Author: jjg
Date: 2019-06-21 11:41 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/97c75e545302

8226592: Fix HTML in table for jdk.zipfs module-info
Reviewed-by: bpb, lancea

! src/jdk.zipfs/share/classes/module-info.java

Changeset: 179204eb9444
Author: jjg
Date: 2019-06-21 12:09 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/179204eb9444

8226593: Fix HTML in com/sun/jdi/doc-files/signature.html
Reviewed-by: sspitsyn, lancea

! src/jdk.jdi/share/classes/com/sun/jdi/doc-files/signature.html

Changeset: 72bbc930d7b6
Author: jwilhelm
Date: 2019-06-22 02:03 +0200
URL: https://hg.openjdk.java.net/zgc/zgc/rev/72bbc930d7b6

Merge

- test/jdk/sun/security/tools/keytool/PSS.java

Changeset: c9e362aef472
Author: zgu
Date: 2019-06-24 09:51 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/c9e362aef472

8226586: Shenandoah: No need to pre-evacuate roots for degenerated GC
Reviewed-by: rkennke

! src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp

Changeset: 73250862f818
Author: michaelm
Date: 2019-06-24 15:10 +0100
URL: https://hg.openjdk.java.net/zgc/zgc/rev/73250862f818

8219804: java/net/MulticastSocket/Promiscuous.java fails intermittently due to NumberFormatException
Reviewed-by: chegar, dfuchs

! test/jdk/java/net/MulticastSocket/Promiscuous.java

Changeset: 6ca3526c4e25
Author: michaelm
Date: 2019-06-24 15:19 +0100
URL: https://hg.openjdk.java.net/zgc/zgc/rev/6ca3526c4e25

8226683: Remove review suggestion from fix to 8219804
Reviewed-by: chegar

! test/jdk/java/net/MulticastSocket/Promiscuous.java

Changeset: aee0d296c0ef
Author: zgu
Date: 2019-06-24 11:46 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/aee0d296c0ef

8226311: Shenandoah: Concurrent evacuation of OopStorage backed weak roots
Reviewed-by: rkennke

! src/hotspot/share/gc/shenandoah/shenandoahConcurrentMark.cpp
! src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp
! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.cpp
! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.hpp
! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.inline.hpp
! src/hotspot/share/gc/shenandoah/shenandoahRootVerifier.cpp
! src/hotspot/share/gc/shenandoah/shenandoahRootVerifier.hpp

Changeset: c396e381cfa4
Author: zgu
Date: 2019-06-24 14:13 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/c396e381cfa4

8226310: Shenandoah: Concurrent evacuation of CLDG
Reviewed-by: rkennke

! src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp
! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.cpp
! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.hpp
! src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.inline.hpp

Changeset: ae2e53e379cb
Author: coleenp
Date: 2019-06-24 16:51 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/ae2e53e379cb

8214822: Move ConcurrentHashTable VALUE parameter to CONFIG
Summary: make VALUE parameter be included in CONFIG configuration, also remove BaseConfig
Reviewed-by: dholmes, kbarrett

! src/hotspot/share/classfile/stringTable.cpp
! src/hotspot/share/classfile/symbolTable.cpp
! src/hotspot/share/prims/resolvedMethodTable.cpp
! src/hotspot/share/utilities/concurrentHashTable.hpp
! src/hotspot/share/utilities/concurrentHashTable.inline.hpp
! src/hotspot/share/utilities/concurrentHashTableTasks.inline.hpp
! test/hotspot/gtest/utilities/test_concurrentHashtable.cpp

Changeset: 80b27dc96ca3
Author: dcubed
Date: 2019-06-24 22:38 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/80b27dc96ca3

8226699: [BACKOUT] JDK-8221734 Deoptimize with handshakes
Reviewed-by: dholmes, rehn, dlong

! src/hotspot/share/aot/aotCodeHeap.cpp
! src/hotspot/share/aot/aotCompiledMethod.cpp
! src/hotspot/share/aot/aotCompiledMethod.hpp
! src/hotspot/share/code/codeCache.cpp
! src/hotspot/share/code/compiledMethod.hpp
! src/hotspot/share/code/nmethod.cpp
! src/hotspot/share/code/nmethod.hpp
! src/hotspot/share/gc/z/zBarrierSetNMethod.cpp
! src/hotspot/share/gc/z/zNMethod.cpp
! src/hotspot/share/jvmci/jvmciEnv.cpp
! src/hotspot/share/oops/method.cpp
! src/hotspot/share/oops/method.hpp
! src/hotspot/share/prims/jvmtiEventController.cpp
! src/hotspot/share/prims/methodHandles.cpp
! src/hotspot/share/prims/whitebox.cpp
! src/hotspot/share/runtime/biasedLocking.cpp
! src/hotspot/share/runtime/biasedLocking.hpp
! src/hotspot/share/runtime/deoptimization.cpp
! src/hotspot/share/runtime/deoptimization.hpp
! src/hotspot/share/runtime/mutex.hpp
! src/hotspot/share/runtime/mutexLocker.cpp
! src/hotspot/share/runtime/mutexLocker.hpp
! src/hotspot/share/runtime/thread.cpp
! src/hotspot/share/runtime/thread.hpp
! src/hotspot/share/runtime/vmOperations.cpp
! src/hotspot/share/runtime/vmOperations.hpp
! src/hotspot/share/services/dtraceAttacher.cpp
- test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java

Changeset: f1e5ddb814b7
Author: serb
Date: 2019-06-21 16:20 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/f1e5ddb814b7

8225146: Accessibility issues in javax/swing/plaf/nimbus/doc-files/properties.html
Reviewed-by: aivanov

! src/java.desktop/share/classes/javax/swing/plaf/nimbus/doc-files/properties.html

Changeset: e17c9a93b505
Author: sspitsyn
Date: 2019-06-21 18:20 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/e17c9a93b505

8224555: vmTestbase/nsk/jvmti/scenarios/contention/TC02/tc02t001/TestDescription.java failed
Summary: Improve synchronization in the test
Reviewed-by: dcubed, amenkov

! test/hotspot/jtreg/vmTestbase/nsk/jvmti/scenarios/contention/TC02/tc02t001.java
! test/hotspot/jtreg/vmTestbase/nsk/jvmti/scenarios/contention/TC02/tc02t001/tc02t001.cpp

Changeset: 4d5eabe8d341
Author: sspitsyn
Date: 2019-06-22 14:35 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/4d5eabe8d341

8226595: jvmti/scenarios/contention/TC04/tc04t001/TestDescription.java still fails due to wrong number of MonitorContendedEntered events
Summary: Fix one more sync issue in the test
Reviewed-by: dcubed, amenkov

! test/hotspot/jtreg/vmTestbase/nsk/jvmti/scenarios/contention/TC04/tc04t001.java

Changeset: 00c08fae63e8
Author: mullan
Date: 2019-06-24 10:11 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/00c08fae63e8

8180005: Provide specific links in KeyManagerFactory and TrustManagerFactory to the Standard Algorithm Names Specification
Reviewed-by: ascarpino

! src/java.base/share/classes/javax/net/ssl/KeyManagerFactory.java
! src/java.base/share/classes/javax/net/ssl/TrustManagerFactory.java

Changeset: 1cd4d287839b
Author: bobv
Date: 2019-06-24 11:49 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/1cd4d287839b

8224502: [TESTBUG] JDK docker test TestSystemMetrics.java fails with access issues and OOM
Reviewed-by: sgehwolf, mseledtsov

! test/jdk/ProblemList.txt
! test/jdk/jdk/internal/platform/docker/TestSystemMetrics.java
! test/lib/jdk/test/lib/containers/cgroup/MetricsTester.java

Changeset: 1e4bbd6fbb2f
Author: bobv
Date: 2019-06-24 11:52 -0400
URL: https://hg.openjdk.java.net/zgc/zgc/rev/1e4bbd6fbb2f

8224506: [TESTBUG] TestDockerMemoryMetrics.java fails with exitValue = 137
Reviewed-by: sgehwolf, mseledtsov

! test/jdk/ProblemList.txt
! test/jdk/jdk/internal/platform/docker/TestDockerMemoryMetrics.java

Changeset: fe6c2f0b42be
Author: jjg
Date: 2019-06-24 13:40 -0700
URL: https://hg.openjdk.java.net/zgc/zgc/rev/fe6c2f0b42be

8226628: The copyright footer should be enclosed in