From Alan.Bateman at oracle.com Fri Nov 1 06:50:53 2019 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Fri, 1 Nov 2019 06:50:53 +0000 Subject: RFR 8233091 : Backout JDK-8212117: Class.forName loads a class but not linked if class is not initialized In-Reply-To: <6643000b-9bd2-5b3a-8321-9fbc716a23f3@oracle.com> References: <6643000b-9bd2-5b3a-8321-9fbc716a23f3@oracle.com> Message-ID: On 31/10/2019 22:50, Brent Christian wrote: > Hi, > > Please review my change to backout JDK-8212117: > http://cr.openjdk.java.net/~bchristi/8233091/webrev-revert-01/ The backout looks right. -Alan From lutz.schmidt at sap.com Fri Nov 1 14:10:41 2019 From: lutz.schmidt at sap.com (Schmidt, Lutz) Date: Fri, 1 Nov 2019 14:10:41 +0000 Subject: 8233078 : fix minimal VM build on Linux ppc64(le) Message-ID: <1D213519-CD1A-4623-8D62-66D58F841D8D@sap.com> Hi Matthias, your changes look good to me. Please note, however, that I'm not a Reviewer. One minor thing: do you really want to keep the old code (as comment) in sharedRuntime_ppc.cpp:2853? Thanks for putting this straight. Regards, Lutz ?On 29.10.19, 14:32, "hotspot-dev on behalf of Baesken, Matthias" wrote: Thanks . May I have a second review please ? Best regards, Matthias From: Doerr, Martin Sent: Dienstag, 29. Oktober 2019 13:48 To: Baesken, Matthias ; 'hotspot-dev at openjdk.java.net' Cc: 'build-dev at openjdk.java.net' Subject: RE: RFR: 8233078 : fix minimal VM build on Linux ppc64(le) Hi Matthias, > Not sure if there are any plans to support OptimizeFill on ppc64 ? This question is not related to this issue. Commenting out parts of it is not a good style. Thank you for your update. The new webrev looks good to me. Best regards, Martin From: Baesken, Matthias > Sent: Dienstag, 29. Oktober 2019 13:25 To: Doerr, Martin >; 'hotspot-dev at openjdk.java.net' > Cc: 'build-dev at openjdk.java.net' > Subject: RE: RFR: 8233078 : fix minimal VM build on Linux ppc64(le) Hi Martin, thanks for the input . 
I did the adjustments you suggested; new webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8233078.1/ Regarding : stubGenerator_ppc.cpp: "Code should better be protected by #ifdef COMPILER2 than commenting out." Currently the if (OptimizeFill) { ... } coding is dead on ppc . See : c2_globals.hpp ------------------------ 234 /* OptimizeFill not yet supported on PowerPC. */ \ 235 product(bool, OptimizeFill, true PPC64_ONLY(&& false), \ c2_init_ppc.cpp ------------------------ 53 if (OptimizeFill) { 54 warning("OptimizeFill is not supported on this CPU."); 55 FLAG_SET_DEFAULT(OptimizeFill, false); Not sure if there are any plans to support OptimizeFill on ppc64 ? Best regards, Matthias Hi Matthias, thanks for fixing it. I have a few requests: disassembler_ppc.cpp: Please remove includes completely if no longer needed (instead of commenting out). sharedRuntime_ppc.cpp: I think it's better to remove the 2 align(InteriorEntryAlignment). Succeeding code is not performance critical. stubGenerator_ppc.cpp: Code should better be protected by #ifdef COMPILER2 than commenting out. Otherwise, looks good to me. Thanks, Martin From: Baesken, Matthias > Sent: Dienstag, 29. Oktober 2019 12:42 To: 'hotspot-dev at openjdk.java.net' > Cc: 'build-dev at openjdk.java.net' >; Doerr, Martin > Subject: RFR: 8233078 : fix minimal VM build on Linux ppc64(le) Hello, please review the following fix . I recently experimented a bit with the minimal VM build on linux x86_64 (--with-jvm-features=minimal --with-jvm-variants=minimal) . This worked fine . However when I tried the minimal vm build on linux ppc64 / ppc64le , I noticed that it fails because of a few wrong dependencies . 
Thanks to Martin for the advice regarding Register ic = as_Register(Matcher::inline_cache_reg_encode()); Replacement with Register ic = R19_inline_cache_reg; In http://cr.openjdk.java.net/~mbaesken/webrevs/8233078.0/src/hotspot/cpu/ppc/sharedRuntime_ppc.cpp.frames.html Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8233078 http://cr.openjdk.java.net/~mbaesken/webrevs/8233078.0/ Thanks, Matthias From kim.barrett at oracle.com Sat Nov 2 00:07:11 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 1 Nov 2019 20:07:11 -0400 Subject: RFR: 8233364: Fix undefined behavior in Canonicalizer::do_ShiftOp Message-ID: Please review this change to Canonicalizer::do_ShiftOp to eliminate several actual or potential invocations of undefined behavior involving shift operations. (See CR for details.) To support this fix, a set of java_shift_xxx functions are added to globalDefinitions.hpp. These use the same implementation techniques used by java_add and friends to perform the corresponding operation with Java semantics for handling overflows and such. With these new java_shift_xxx functions available, the constant folding by do_ShiftOp is now trivially implemented by calls to those functions. Added gtest-based unit tests covering the new shift functions. Also added unit tests for java_add and friends, which should have been part of their addition by JDK-8145096. (Oops!) CR: https://bugs.openjdk.java.net/browse/JDK-8233364 Webrev: https://cr.openjdk.java.net/~kbarrett/8233364/open.00/ Testing: mach5 tier1-3. From aph at redhat.com Sat Nov 2 09:53:43 2019 From: aph at redhat.com (Andrew Haley) Date: Sat, 2 Nov 2019 09:53:43 +0000 Subject: RFR: 8233364: Fix undefined behavior in Canonicalizer::do_ShiftOp In-Reply-To: References: Message-ID: On 11/2/19 12:07 AM, Kim Barrett wrote: > CR: > https://bugs.openjdk.java.net/browse/JDK-8233364 > > Webrev: > https://cr.openjdk.java.net/~kbarrett/8233364/open.00/ That looks right. Thanks for the cleanup. 
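For readers following the shift thread: the java_shift_xxx helpers Kim describes have to reproduce Java's shift semantics, which — unlike the raw C++ operators — are fully defined for every input. A small plain-Java illustration of those semantics (not the hotspot code itself):

```java
public class Main {
    public static void main(String[] args) {
        // Java shifts are defined for all inputs: the count is masked to
        // the operand width (count & 31 for int), and negative operands
        // shift as two's-complement values. The C++ java_shift_xxx helpers
        // must produce exactly these results without themselves invoking
        // C++ undefined behavior.
        System.out.println(-1 << 1);   // -2
        System.out.println(1 << 33);   // 2, since 33 & 31 == 1
        System.out.println(-8 >> 1);   // -4, arithmetic shift keeps the sign
        System.out.println(-8 >>> 1);  // 2147483644, logical shift
    }
}
```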
-- Andrew Haley (he/him) Java Platform Lead Engineer Red Hat UK Ltd. https://keybase.io/andrewhaley EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From kim.barrett at oracle.com Sat Nov 2 19:21:11 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Sat, 2 Nov 2019 15:21:11 -0400 Subject: RFR: 8233364: Fix undefined behavior in Canonicalizer::do_ShiftOp In-Reply-To: References: Message-ID: <9DB96CE6-7ABB-4242-99F8-1F6513A389BD@oracle.com> > On Nov 2, 2019, at 5:53 AM, Andrew Haley wrote: > > On 11/2/19 12:07 AM, Kim Barrett wrote: >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8233364 >> >> Webrev: >> https://cr.openjdk.java.net/~kbarrett/8233364/open.00/ > > That looks right. Thanks for the cleanup. Thanks. From matthias.baesken at sap.com Mon Nov 4 08:48:55 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Mon, 4 Nov 2019 08:48:55 +0000 Subject: 8233078 : fix minimal VM build on Linux ppc64(le) In-Reply-To: <1D213519-CD1A-4623-8D62-66D58F841D8D@sap.com> References: <1D213519-CD1A-4623-8D62-66D58F841D8D@sap.com> Message-ID: Thanks for the reviews ! > One minor thing: do you really want to keep the old code (as comment) in > sharedRuntime_ppc.cpp:2853? I remove it . Best regards, Matthias > -----Original Message----- > From: Schmidt, Lutz > Sent: Freitag, 1. November 2019 15:11 > To: Baesken, Matthias ; Doerr, Martin > ; 'hotspot-dev at openjdk.java.net' dev at openjdk.java.net> > Cc: 'build-dev at openjdk.java.net' > Subject: Re: 8233078 : fix minimal VM build on Linux ppc64(le) > > Hi Matthias, > > your changes look good to me. Please note, however, that I'm not a > Reviewer. > > One minor thing: do you really want to keep the old code (as comment) in > sharedRuntime_ppc.cpp:2853? > > Thanks for putting this straight. > > Regards, > Lutz > > ?On 29.10.19, 14:32, "hotspot-dev on behalf of Baesken, Matthias" dev-bounces at openjdk.java.net on behalf of matthias.baesken at sap.com> > wrote: > > Thanks . > May I have a second review please ? 
> > Best regards, Matthias > > From: Doerr, Martin > Sent: Dienstag, 29. Oktober 2019 13:48 > To: Baesken, Matthias ; 'hotspot- > dev at openjdk.java.net' > Cc: 'build-dev at openjdk.java.net' > Subject: RE: RFR: 8233078 : fix minimal VM build on Linux ppc64(le) > > Hi Matthias, > > > Not sure if there are any plans to support OptimizeFill on ppc64 ? > This question is not related to this issue. > Commenting out parts of it is not a good style. > > Thank you for your update. The new webrev looks good to me. > > Best regards, > Martin > > > From: Baesken, Matthias > > > Sent: Dienstag, 29. Oktober 2019 13:25 > To: Doerr, Martin > >; 'hotspot- > dev at openjdk.java.net' dev at openjdk.java.net>> > Cc: 'build-dev at openjdk.java.net' dev at openjdk.java.net> > Subject: RE: RFR: 8233078 : fix minimal VM build on Linux ppc64(le) > > Hi Martin, thanks for the input . > I did the adjustments you suggested; new webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8233078.1/ > > Regarding : stubGenerator_ppc.cpp: "Code should better be protected by > #ifdef COMPILER2 than commenting out." > Currently the if (OptimizeFill) { ... } coding is dead on ppc . > > See : > c2_globals.hpp > ------------------------ > 234 /* OptimizeFill not yet supported on PowerPC. */ \ > 235 product(bool, OptimizeFill, true PPC64_ONLY(&& false), \ > > c2_init_ppc.cpp > ------------------------ > 53 if (OptimizeFill) { > 54 warning("OptimizeFill is not supported on this CPU."); > 55 FLAG_SET_DEFAULT(OptimizeFill, false); > > > Not sure if there are any plans to support OptimizeFill on ppc64 ? > > Best regards, Matthias > > > > > Hi Matthias, > > thanks for fixing it. I have a few requests: > > disassembler_ppc.cpp: > Please remove includes completely if no longer needed (instead of > commenting out). > > sharedRuntime_ppc.cpp: > I think it's better to remove the 2 align(InteriorEntryAlignment). > Succeeding code is not performance critical. 
> > stubGenerator_ppc.cpp: > Code should better be protected by #ifdef COMPILER2 than commenting > out. > > Otherwise, looks good to me. > > Thanks, > Martin > > > From: Baesken, Matthias > > > Sent: Dienstag, 29. Oktober 2019 12:42 > To: 'hotspot-dev at openjdk.java.net' dev at openjdk.java.net> > Cc: 'build-dev at openjdk.java.net' dev at openjdk.java.net>; Doerr, > Martin > > Subject: RFR: 8233078 : fix minimal VM build on Linux ppc64(le) > > Hello, please review the following fix . > I recently experimented a bit with the minimal VM build on linux x86_64 > (--with-jvm-features=minimal --with-jvm-variants=minimal) . > This worked fine . > > However when I tried the minimal vm build on linux ppc64 / ppc64le , I > noticed that it fails because of a few wrong dependencies . > Thanks to Martin for the advice regarding > > Register ic = as_Register(Matcher::inline_cache_reg_encode()); > > Replacement with > > > Register ic = R19_inline_cache_reg; > > In > http://cr.openjdk.java.net/~mbaesken/webrevs/8233078.0/src/hotspot/cpu > /ppc/sharedRuntime_ppc.cpp.frames.html > > > Bug/webrev : > > https://bugs.openjdk.java.net/browse/JDK-8233078 > http://cr.openjdk.java.net/~mbaesken/webrevs/8233078.0/ > > > Thanks, Matthias > > > > From matthias.baesken at sap.com Mon Nov 4 11:27:22 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Mon, 4 Nov 2019 11:27:22 +0000 Subject: RFR: 8233328: fix minimal VM build on Linux s390x Message-ID: Hello, please review the following change that fixes the "minimal VM" build on linuxs390x . While looking into the issue I noticed that MacroAssembler::generate_type_profiling ( in http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.1/src/hotspot/cpu/s390/macroAssembler_s390.cpp.frames.html ) Is ununsed so I removed it (probably it is a left over from old jdk8 (?) times ) . 
In 2929 void MacroAssembler::nmethod_UEP(Label& ic_miss) { 2930 #ifdef COMPILER2 2931 Register ic_reg = as_Register(Matcher::inline_cache_reg_encode()); 2932 #else 2933 Register ic_reg = as_Register(0); 2934 #endif We probably still need to replace as_Register(0); with something better , any suggestions ? Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8233328 http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.1/ Thanks, Matthias From jai.forums2013 at gmail.com Mon Nov 4 14:41:09 2019 From: jai.forums2013 at gmail.com (Jaikiran Pai) Date: Mon, 4 Nov 2019 20:11:09 +0530 Subject: ClhsdbCDSCore jtreg test fails on OSX Message-ID: <15b4b0dc-6ad0-7a3c-643b-b121766ff1db@gmail.com> Not sure if this is the right place to report this, but given that the failing test is in test/hotspot, I thought best to ask here. I was working on an unrelated fix and happened to run tier1 tests. All went fine, except the test/hotspot/jtreg/serviceability/sa/ClhsdbCDSCore.java failed with: java.lang.Error: cores is not a directory or does not have write permissions at ClhsdbCDSCore.main(ClhsdbCDSCore.java:115) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) Looking at the testcase itself, I see this http://hg.openjdk.java.net/jdk/jdk/file/6f98d0173a72/test/hotspot/jtreg/serviceability/sa/ClhsdbCDSCore.java#l112 if (Platform.isOSX()) { File coresDir = new File("/cores"); if (!coresDir.isDirectory() || !coresDir.canWrite()) { throw new Error("cores is not a directory or does not have write permissions"); I'm on OSX. So this test expects a directory called "cores" at the root of the filesystem? That looks odd. I don't have any such directory. Is this an issue in the test case or am I missing some configuration to have this "core" dump generated at some other location while running this test? 
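The probe the test performs is easy to reproduce outside jtreg. A minimal sketch — run here against a freshly created temp directory so the result is predictable on any machine; against /cores itself the answer depends on the directory's owner and group:

```java
import java.io.File;
import java.nio.file.Files;

public class Main {
    public static void main(String[] args) throws Exception {
        // Same check ClhsdbCDSCore applies to /cores on macOS: the path
        // must be an existing directory the test user can write to.
        File dir = Files.createTempDirectory("cores-demo").toFile();
        boolean usable = dir.isDirectory() && dir.canWrite();
        System.out.println(usable); // true for a directory we just created
    }
}
```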
-Jaikiran From jai.forums2013 at gmail.com Mon Nov 4 14:49:12 2019 From: jai.forums2013 at gmail.com (Jaikiran Pai) Date: Mon, 4 Nov 2019 20:19:12 +0530 Subject: ClhsdbCDSCore jtreg test fails on OSX In-Reply-To: <15b4b0dc-6ad0-7a3c-643b-b121766ff1db@gmail.com> References: <15b4b0dc-6ad0-7a3c-643b-b121766ff1db@gmail.com> Message-ID: On 04/11/19 8:11 PM, Jaikiran Pai wrote: > ... > Looking at the testcase itself, I see this > http://hg.openjdk.java.net/jdk/jdk/file/6f98d0173a72/test/hotspot/jtreg/serviceability/sa/ClhsdbCDSCore.java#l112 > > if (Platform.isOSX()) { > > ??? File coresDir = new File("/cores"); > > ??? if (!coresDir.isDirectory() || !coresDir.canWrite()) { > > ??????? throw new Error("cores is not a directory or does not have write > permissions"); > > > I'm on OSX. So this test expects a directory called "cores" at the root > of the filesystem? That looks odd. I don't have any such directory. Correction - I do have that directory (my "ls" command that I previously used to check had a typo), but that /cores directory is owned by "root" and the test is running as a regular user. -Jaikiran From daniel.daugherty at oracle.com Mon Nov 4 16:40:22 2019 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Mon, 4 Nov 2019 11:40:22 -0500 Subject: ClhsdbCDSCore jtreg test fails on OSX In-Reply-To: References: <15b4b0dc-6ad0-7a3c-643b-b121766ff1db@gmail.com> Message-ID: <85cf1260-fdcb-c0e9-4d05-b411b97de7f3@oracle.com> Moving this thread over to serviceability-dev at ... since this question is about Serviceability Agent tests... Bcc'ing hotspot-dev at ... so folks know that the thread moved... On 11/4/19 9:49 AM, Jaikiran Pai wrote: > On 04/11/19 8:11 PM, Jaikiran Pai wrote: >> ... >> Looking at the testcase itself, I see this >> http://hg.openjdk.java.net/jdk/jdk/file/6f98d0173a72/test/hotspot/jtreg/serviceability/sa/ClhsdbCDSCore.java#l112 >> >> if (Platform.isOSX()) { >> >> ??? File coresDir = new File("/cores"); >> >> ??? 
if (!coresDir.isDirectory() || !coresDir.canWrite()) { >> >> throw new Error("cores is not a directory or does not have write >> permissions"); >> >> >> I'm on OSX. So this test expects a directory called "cores" at the root >> of the filesystem? That looks odd. I don't have any such directory. > Correction - I do have that directory (my "ls" command that I previously > used to check had a typo), but that /cores directory is owned by "root" > and the test is running as a regular user. > > -Jaikiran $ ls -ld /cores drwxrwxr-t 2 root admin 64 Nov 4 09:22 /cores/ so the directory on my macOSX machine is writable by group 'admin' and my login happens to belong to group 'admin'. Dan From matthias.baesken at sap.com Tue Nov 5 08:54:27 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Tue, 5 Nov 2019 08:54:27 +0000 Subject: RFR: 8233328: fix minimal VM build on Linux s390x Message-ID: Hello, here is another webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.2/ It adjusts the coding in macroAssembler_s390.cpp mentioned below . Best regards, Matthias From: Baesken, Matthias Sent: Montag, 4. November 2019 12:27 To: 'hotspot-dev at openjdk.java.net' Subject: RFR: 8233328: fix minimal VM build on Linux s390x Hello, please review the following change that fixes the "minimal VM" build on linux s390x . While looking into the issue I noticed that MacroAssembler::generate_type_profiling ( in http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.1/src/hotspot/cpu/s390/macroAssembler_s390.cpp.frames.html ) is unused so I removed it (probably it is a left over from old jdk8 (?) times ) . In 2929 void MacroAssembler::nmethod_UEP(Label& ic_miss) { 2930 #ifdef COMPILER2 2931 Register ic_reg = as_Register(Matcher::inline_cache_reg_encode()); 2932 #else 2933 Register ic_reg = as_Register(0); 2934 #endif We probably still need to replace as_Register(0); with something better , any suggestions ? 
Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8233328 http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.1/ Thanks, Matthias From jan.lahoda at oracle.com Tue Nov 5 09:50:21 2019 From: jan.lahoda at oracle.com (Jan Lahoda) Date: Tue, 5 Nov 2019 10:50:21 +0100 Subject: RFR: JDK-8232684: Make switch expressions final In-Reply-To: References: Message-ID: I've missed updates to some hotspot and jdk tests in the first patch. The problem is unqualified invocations of Thread.yield(), which are no longer allowed (from the spec: JLS 3.9 Keywords: "All invocations of a method called yield must be qualified so as to be distinguished from a yield statement."). Therefore, these invocations need to be changed to qualified invocations (i.e. Thread.yield()). Updates to the hotspot and jdk tests are here: http://cr.openjdk.java.net/~jlahoda/8232684/webrev.delta.00.01/ Full updated webrev, with changes to both javac and the tests, is here: http://cr.openjdk.java.net/~jlahoda/8232684/webrev.01/ How does that look? Thanks, Jan On 21. 10. 19 16:17, Maurizio Cimadamore wrote: > Looks generally good - went through the test updates one by one and > they look ok, except this: > > http://cr.openjdk.java.net/~jlahoda/8232684/webrev.00/test/langtools/tools/jdeps/listdeps/ListModuleDeps.java.udiff.html > > > Which you explained to me offline (we need to change this code every > time the compiler stops using the @Preview annotation - ugh). Nothing > specific to this webrev, so I approve. 
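Jan's point about qualification can be seen in a tiny example (it needs a JDK where switch expressions are final, i.e. 14 or later): inside a switch rule, `yield` acts as a statement that produces the switch value, so a bare call to a method named yield would be ambiguous and JLS 3.9 requires the qualified form.

```java
public class Main {
    static int hoursWorked(int day) {
        return switch (day) {
            case 6, 7 -> 0;   // weekend
            default -> {
                int h = 8;
                yield h;      // 'yield' statement: produces the switch value
            }
        };
    }

    public static void main(String[] args) {
        // A bare "yield()" call would clash with the yield statement,
        // hence the qualified invocation:
        Thread.yield();
        System.out.println(hoursWorked(3)); // 8
        System.out.println(hoursWorked(6)); // 0
    }
}
```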
> > Maurizio > > On 21/10/2019 14:49, Jan Lahoda wrote: >> Hi, >> >> As part of preparation for proposing JEP 361: Switch Expressions >> (Standard) to target, I would like to ask for a review of the patch to >> make switch expression a non-preview feature in javac: >> http://cr.openjdk.java.net/~jlahoda/8232684/webrev.00/ >> >> The patch basically removes the feature from the list of preview >> features, updates test to this new state (removes --enable-preview >> from associated tests, and adjusts their expected output), and removes >> the @PreviewFeature annotation and associated text from the javadoc of >> the Trees API. >> >> I also would like to ask for a review for the CSR associated with that: >> https://bugs.openjdk.java.net/browse/JDK-8232685 >> >> Reviews/comments on either of these would be very welcome! >> >> JBS: https://bugs.openjdk.java.net/browse/JDK-8232684 >> >> Thanks! >> >> Jan From lutz.schmidt at sap.com Tue Nov 5 15:10:16 2019 From: lutz.schmidt at sap.com (Schmidt, Lutz) Date: Tue, 5 Nov 2019 15:10:16 +0000 Subject: 8233328: fix minimal VM build on Linux s390x Message-ID: <8CED8B50-F788-43D8-BA49-24DFA368EF18@sap.com> Hi Matthias, your change looks even better now. Please note: I?m not a Reviewer. One little thing: you could make array_equals() depend on #ifdef COMPILER2 as well. That would save some more bytes. Thanks, Lutz From: "Baesken, Matthias" Date: Tuesday, 5. November 2019 at 09:54 To: "hotspot-dev at openjdk.java.net" Cc: "Doerr, Martin (martin.doerr at sap.com)" , Lutz Schmidt Subject: RE: RFR: 8233328: fix minimal VM build on Linux s390x Hello, here is another webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.2/ It adjusts the coding in macroAssembler_s390.cpp mentioned below . Best regards, Matthias From: Baesken, Matthias Sent: Montag, 4. 
November 2019 12:27 To: 'hotspot-dev at openjdk.java.net' Subject: RFR: 8233328: fix minimal VM build on Linux s390x Hello, please review the following change that fixes the "minimal VM" build on linux s390x . While looking into the issue I noticed that MacroAssembler::generate_type_profiling ( in http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.1/src/hotspot/cpu/s390/macroAssembler_s390.cpp.frames.html ) is unused so I removed it (probably it is a left over from old jdk8 (?) times ) . In 2929 void MacroAssembler::nmethod_UEP(Label& ic_miss) { 2930 #ifdef COMPILER2 2931 Register ic_reg = as_Register(Matcher::inline_cache_reg_encode()); 2932 #else 2933 Register ic_reg = as_Register(0); 2934 #endif We probably still need to replace as_Register(0); with something better , any suggestions ? Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8233328 http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.1/ Thanks, Matthias From bob.vandette at oracle.com Tue Nov 5 21:54:56 2019 From: bob.vandette at oracle.com (Bob Vandette) Date: Tue, 5 Nov 2019 16:54:56 -0500 Subject: RFR: 8230305: Cgroups v2: Container awareness In-Reply-To: <7540a208e306ab957032b18178a53c6afa105d33.camel@redhat.com> References: <072f66ee8c44034831b4e38f6470da4bff6edd07.camel@redhat.com> <7540a208e306ab957032b18178a53c6afa105d33.camel@redhat.com> Message-ID: Severin, Thanks for taking on this cgroup v2 improvement. In general I like the implementation and the refactoring. The CachedMetric class is nice. We can add any metric we want to cache in a more general way. Is this the latest version of the webrev? http://cr.openjdk.java.net/~sgehwolf/webrevs/cgroupsv2-hotspot/03/webrev/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp.html It looks like you need to add the caching support for active_processor_count (JDK-8227006). I'm not sure it's worth providing different strings for Unlimited versus Max or Scaled shares. 
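The CachedMetric idea mentioned in the review — remembering a container metric together with the time it was read so repeated queries don't re-read the cgroup filesystem until the entry goes stale — can be sketched roughly like this. The names, the timeout policy, and the explicit clock parameter are illustrative assumptions, not the actual hotspot implementation:

```java
import java.util.function.LongSupplier;

public class Main {
    static class CachedMetric {
        private long value;
        private long timestamp = -1;   // -1 marks "never read"
        private final long timeoutMs;
        CachedMetric(long timeoutMs) { this.timeoutMs = timeoutMs; }
        long value(long nowMs, LongSupplier reader) {
            if (timestamp < 0 || nowMs - timestamp > timeoutMs) {
                value = reader.getAsLong();   // expensive re-read
                timestamp = nowMs;
            }
            return value;
        }
    }

    public static void main(String[] args) {
        int[] reads = {0};
        CachedMetric metric = new CachedMetric(20);
        LongSupplier reader = () -> { reads[0]++; return 100L; };
        System.out.println(metric.value(0, reader));   // 100: first read
        System.out.println(metric.value(10, reader));  // 100: served from cache
        System.out.println(reads[0]);                  // 1
        System.out.println(metric.value(50, reader));  // 100: entry stale, re-read
        System.out.println(reads[0]);                  // 2
    }
}
```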
I?d just try to be compatible with the cgroupv2 output so you don?t have to change the test. I wonder if it?s worth trying to synthesize memory_max_usage_in_bytes() by keeping the highest value ever returned by the API. Are you planning on adding cgroupv2 support for jdk.internal.platform.Metrics? jdk/open/src//java.base/linux/classes/jdk/internal/platform/cgroupv2/Metrics.java This is needed for -XshowSettings:system and will eventually be needed for a new Container MXBean. Bob. > On Oct 18, 2019, at 12:24 PM, Severin Gehwolf wrote: > > On Tue, 2019-10-15 at 11:19 +0200, Severin Gehwolf wrote: >> Hi, >> >> Please review this update to the container detection code which adds >> cgroup version 2 support. Initial review of this started in [1]. Bob >> preferred one big patch and didn't like the other refactoring in >> os_linux so this has been dropped. >> >> This new patch includes both, the refactoring to move cgroup v1 >> specific implementation out of osContainer_linux.{h,c}pp files[2] as >> well as updated detection code and the implementation for cgroups >> v2[3]. After this patch, osContainer_linux{h,c}pp files are cgroups >> version agnostic. Implementations for specific versions are in >> cgroupV{1,2}Subsystem.{c,h}pp files. Some shared, cgroup version >> agnostic code is in cgroupSubsystem.{c,h}pp. >> >> Updated detection logic looks in /proc/cgroups first for hierarchy ids >> of cpu/cpuset/cpuacct/memory controllers. If the hierarchy id for all >> of them is 0 and they're all enabled, cgroups v2, unified hierarchy is >> assumed. Otherwise it uses cgroups v1 controllers (also known as hybrid >> or legacy hierarchy). Note that controllers can be only be mounted via >> one or the other hierarchy, legacy (v1) or unified (v2) - exclusive[4]. >> >> Note that for the cgroups v2 cpu_shares() implementation a reverse >> mapping is needed for the plain value exposed via cpu.weight. 
>> Additionally, there doesn't seem to be an API exposed to use for an >> implementation of memory_max_usage_in_bytes() in cgroups v2. Hence, it >> returns OSCONTAINER_ERROR which is mapped to "not supported" elsewhere >> in hotspot code. >> >> Once reviewed, I intend to push this in two changesets/bugs. One for >> the refactoring work (no-op) as JDK-8230848. Another for the cgroupv2 >> impl with JDK-8230305. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8230305 > > Rebased webrev on top of JDK-8232207: > http://cr.openjdk.java.net/~sgehwolf/webrevs/cgroupsv2-hotspot/03/webrev/ > > Thanks, > Severin > >> >> Testing: tier1 tests on Linux x86_64. Docker/podman hotspot tests on >> F30 with hybrid cgroup v1 hierarchy. Hotspot container tests on F31 >> with unified hierarchy on Linux x86_64. jdk/submit. All pass. >> >> Thoughts? >> >> Thanks, >> Severin >> >> [1] http://mail.openjdk.java.net/pipermail/hotspot-dev/2019-September/039605.html >> http://mail.openjdk.java.net/pipermail/hotspot-dev/2019-October/039708.html >> [2] http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8230848/04/webrev/ >> [3] http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8230305/06/webrev/ >> [4] https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html#mounting >> "All controllers which support v2 and are not bound to a v1 hierarchy are >> automatically bound to the v2 hierarchy and show up at the root. >> Controllers which are not in active use in the v2 hierarchy can be >> bound to other hierarchies." > From markus.knetschke at gmail.com Tue Nov 5 22:26:41 2019 From: markus.knetschke at gmail.com (Markus Knetschke) Date: Tue, 5 Nov 2019 23:26:41 +0100 Subject: ARM32 build broken due to JDK-8232050 In-Reply-To: References: Message-ID: Hi, I already used the web form to report the bug which resulted in JDK-8233599 The bug was closed as a duplicate but the referenced bug is a similar bug on ARM64 but my report was for an armv6l which is a ARM32 machine. 
The error message is: /home/mknjc/jdk/src/hotspot/cpu/arm/vtableStubs_arm.cpp:91: undefined reference to `Klass::vtable_start_offset()' This is with the current jdk master branch. The fix for this also is as simple as it gets: diff --git a/src/hotspot/cpu/arm/vtableStubs_arm.cpp b/src/hotspot/cpu/arm/vtableStubs_arm.cpp index 2c564b8189..f84e11662c 100644 --- a/src/hotspot/cpu/arm/vtableStubs_arm.cpp +++ b/src/hotspot/cpu/arm/vtableStubs_arm.cpp @@ -32,6 +32,7 @@ #include "oops/compiledICHolder.hpp" #include "oops/instanceKlass.hpp" #include "oops/klassVtable.hpp" +#include "oops/klass.inline.hpp" #include "runtime/sharedRuntime.hpp" #include "vmreg_arm.inline.hpp" #ifdef COMPILER2 So the bug isn't a duplicate and isn't fixed yet. Should I report a new bug or could JDK-8233599 opened again? Thanks Markus From david.holmes at oracle.com Tue Nov 5 23:36:04 2019 From: david.holmes at oracle.com (David Holmes) Date: Wed, 6 Nov 2019 09:36:04 +1000 Subject: ARM32 build broken due to JDK-8232050 In-Reply-To: References: Message-ID: <38650a01-27ac-a880-5e31-4c52beddb40f@oracle.com> Hi Markus, I've reopened the bug. Thanks, David On 6/11/2019 8:26 am, Markus Knetschke wrote: > Hi, > > I already used the web form to report the bug which resulted in JDK-8233599 > The bug was closed as a duplicate but the referenced bug is a similar > bug on ARM64 but my report was for an armv6l which is a ARM32 machine. > The error message is: > /home/mknjc/jdk/src/hotspot/cpu/arm/vtableStubs_arm.cpp:91: undefined > reference to `Klass::vtable_start_offset()' > This is with the current jdk master branch. 
> > The fix for this also is as simple as it gets: > diff --git a/src/hotspot/cpu/arm/vtableStubs_arm.cpp > b/src/hotspot/cpu/arm/vtableStubs_arm.cpp > index 2c564b8189..f84e11662c 100644 > --- a/src/hotspot/cpu/arm/vtableStubs_arm.cpp > +++ b/src/hotspot/cpu/arm/vtableStubs_arm.cpp > @@ -32,6 +32,7 @@ > #include "oops/compiledICHolder.hpp" > #include "oops/instanceKlass.hpp" > #include "oops/klassVtable.hpp" > +#include "oops/klass.inline.hpp" > #include "runtime/sharedRuntime.hpp" > #include "vmreg_arm.inline.hpp" > #ifdef COMPILER2 > > So the bug isn't a duplicate and isn't fixed yet. Should I report a > new bug or could JDK-8233599 opened again? > > Thanks > Markus > From sgehwolf at redhat.com Wed Nov 6 09:47:58 2019 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Wed, 06 Nov 2019 10:47:58 +0100 Subject: RFR: 8230305: Cgroups v2: Container awareness In-Reply-To: References: <072f66ee8c44034831b4e38f6470da4bff6edd07.camel@redhat.com> <7540a208e306ab957032b18178a53c6afa105d33.camel@redhat.com> Message-ID: Hi Bob, On Tue, 2019-11-05 at 16:54 -0500, Bob Vandette wrote: > Severin, > > Thanks for taking on this cgroup v2 improvement. > > In general I like the implementation and the refactoring. The CachedMetric class is nice. > We can add any metric we want to cache in a more general way. > > Is this the latest version of the webrev? > > http://cr.openjdk.java.net/~sgehwolf/webrevs/cgroupsv2-hotspot/03/webrev/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp.html > > It looks like you need to add the caching support for active_processor_count (JDK-8227006). No, my latest version is v04, but it's not been properly rebased on top of JDK-8227006, yet, as it hasn't been pushed at the time. 
Anyway, 04, already has caching for active_processor_count and avoids calling os::Linux::active_processor_count() unconditionally as in v03 at the expense of making CgroupSubsystem a friend class of Linux: http://cr.openjdk.java.net/~sgehwolf/webrevs/cgroupsv2-hotspot/04/webrev/ I'll do a proper rebase ASAP. > I?m not sure it?s worth providing different strings for Unlimited versus Max or Scaled shares. > I?d just try to be compatible with the cgroupv2 output so you don?t have to change the test. OK. Will do. > I wonder if it?s worth trying to synthesize memory_max_usage_in_bytes() by keeping the highest > value ever returned by the API. Interesting idea. I'll ponder this a bit and get back to you. > Are you planning on adding cgroupv2 support for jdk.internal.platform.Metrics? Yes, I am. I wanted to get feedback on the hotspot parts first, though, so as to avoid needing to change both impls. In general the Java impls will be a mirrors of the hotspot versions of the API. > jdk/open/src//java.base/linux/classes/jdk/internal/platform/cgroupv2/Metrics.java > > This is needed for -XshowSettings:system and will eventually be needed for > a new Container MXBean. Yes, understood. Thanks for the review! Cheers, Severin > Bob. > > > > On Oct 18, 2019, at 12:24 PM, Severin Gehwolf wrote: > > > > On Tue, 2019-10-15 at 11:19 +0200, Severin Gehwolf wrote: > > > Hi, > > > > > > Please review this update to the container detection code which adds > > > cgroup version 2 support. Initial review of this started in [1]. Bob > > > preferred one big patch and didn't like the other refactoring in > > > os_linux so this has been dropped. > > > > > > This new patch includes both, the refactoring to move cgroup v1 > > > specific implementation out of osContainer_linux.{h,c}pp files[2] as > > > well as updated detection code and the implementation for cgroups > > > v2[3]. After this patch, osContainer_linux{h,c}pp files are cgroups > > > version agnostic. 
Implementations for specific versions are in > > > cgroupV{1,2}Subsystem.{c,h}pp files. Some shared, cgroup version > > > agnostic code is in cgroupSubsystem.{c,h}pp. > > > > > > Updated detection logic looks in /proc/cgroups first for hierarchy ids > > > of cpu/cpuset/cpuacct/memory controllers. If the hierarchy id for all > > > of them is 0 and they're all enabled, cgroups v2, unified hierarchy is > > > assumed. Otherwise it uses cgroups v1 controllers (also known as hybrid > > > or legacy hierarchy). Note that controllers can be only be mounted via > > > one or the other hierarchy, legacy (v1) or unified (v2) - exclusive[4]. > > > > > > Note that for the cgroups v2 cpu_shares() implementation a reverse > > > mapping is needed for the plain value exposed via cpu.weight. > > > Additionally, there doesn't seem to be an API exposed to use for an > > > implementation of memory_max_usage_in_bytes() in cgroups v2. Hence, it > > > returns OSCONTAINER_ERROR which is mapped to "not supported" elsewhere > > > in hotspot code. > > > > > > Once reviewed, I intend to push this in two changesets/bugs. One for > > > the refactoring work (no-op) as JDK-8230848. Another for the cgroupv2 > > > impl with JDK-8230305. > > > > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8230305 > > > > Rebased webrev on top of JDK-8232207: > > http://cr.openjdk.java.net/~sgehwolf/webrevs/cgroupsv2-hotspot/03/webrev/ > > > > Thanks, > > Severin > > > > > Testing: tier1 tests on Linux x86_64. Docker/podman hotspot tests on > > > F30 with hybrid cgroup v1 hierarchy. Hotspot container tests on F31 > > > with unified hierarchy on Linux x86_64. jdk/submit. All pass. > > > > > > Thoughts? 
> > > > > > Thanks, > > > Severin > > > > > > [1] http://mail.openjdk.java.net/pipermail/hotspot-dev/2019-September/039605.html > > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2019-October/039708.html > > > [2] http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8230848/04/webrev/ > > > [3] http://cr.openjdk.java.net/~sgehwolf/webrevs/JDK-8230305/06/webrev/ > > > [4] https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html#mounting > > > "All controllers which support v2 and are not bound to a v1 hierarchy are > > > automatically bound to the v2 hierarchy and show up at the root. > > > Controllers which are not in active use in the v2 hierarchy can be > > > bound to other hierarchies." From martin.doerr at sap.com Wed Nov 6 11:43:32 2019 From: martin.doerr at sap.com (Doerr, Martin) Date: Wed, 6 Nov 2019 11:43:32 +0000 Subject: 8233328: fix minimal VM build on Linux s390x In-Reply-To: <8CED8B50-F788-43D8-BA49-24DFA368EF18@sap.com> References: <8CED8B50-F788-43D8-BA49-24DFA368EF18@sap.com> Message-ID: Hi Matthias, > One little thing: you could make array_equals() depend on #ifdef COMPILER2 as well. > That would save some more bytes. Please either protect all or no intrinsic implementations with #ifdef COMPILER2 (string_compress to string_indexof_char). sharedRuntime_s390.cpp: #include should be sorted alphabetically. Otherwise looks good to me. Thanks for fixing it. Best regards, Martin From: Schmidt, Lutz Sent: Dienstag, 5. November 2019 16:10 To: Baesken, Matthias ; 'hotspot-dev at openjdk.java.net' Cc: Doerr, Martin Subject: Re: 8233328: fix minimal VM build on Linux s390x Hi Matthias, your change looks even better now. Please note: I'm not a Reviewer. One little thing: you could make array_equals() depend on #ifdef COMPILER2 as well. That would save some more bytes. Thanks, Lutz From: "Baesken, Matthias" > Date: Tuesday, 5. 
November 2019 at 09:54 To: "hotspot-dev at openjdk.java.net" > Cc: "Doerr, Martin (martin.doerr at sap.com)" >, Lutz Schmidt > Subject: RE: RFR: 8233328: fix minimal VM build on Linux s390x Hello, here is another webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.2/ It adjusts the coding in macroAssembler_s390.cpp mentioned below . Best regards, Matthias From: Baesken, Matthias Sent: Montag, 4. November 2019 12:27 To: 'hotspot-dev at openjdk.java.net' > Subject: RFR: 8233328: fix minimal VM build on Linux s390x Hello, please review the following change that fixes the "minimal VM" build on linux s390x . While looking into the issue I noticed that MacroAssembler::generate_type_profiling ( in http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.1/src/hotspot/cpu/s390/macroAssembler_s390.cpp.frames.html ) is unused, so I removed it (probably it is a leftover from old jdk8 (?) times) . In 2929 void MacroAssembler::nmethod_UEP(Label& ic_miss) { 2930 #ifdef COMPILER2 2931 Register ic_reg = as_Register(Matcher::inline_cache_reg_encode()); 2932 #else 2933 Register ic_reg = as_Register(0); 2934 #endif We probably still need to replace as_Register(0); with something better, any suggestions ? Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8233328 http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.1/ Thanks, Matthias From shade at redhat.com Wed Nov 6 12:18:58 2019 From: shade at redhat.com (Aleksey Shipilev) Date: Wed, 6 Nov 2019 13:18:58 +0100 Subject: RFR (XS) 8233695: AArch64 build failures after -Wno-extra removal Message-ID: Bug: https://bugs.openjdk.java.net/browse/JDK-8233695 Fix: https://cr.openjdk.java.net/~shade/8233695/webrev.01/ This is the simplest fix I can think of in orderAccess: cast away constness. Also, trivially unused parameter in templateInterpreterGenerator_aarch64.cpp. The issue is exposed in 14, but it is actually there in all releases down to 8-aarch64. 
Testing: aarch64 build, tier1 (running) -- Thanks, -Aleksey From aph at redhat.com Wed Nov 6 12:33:29 2019 From: aph at redhat.com (Andrew Haley) Date: Wed, 6 Nov 2019 12:33:29 +0000 Subject: [aarch64-port-dev ] RFR (XS) 8233695: AArch64 build failures after -Wno-extra removal In-Reply-To: References: Message-ID: <46aa9d40-7353-c5b6-6bd9-4152eee0a3b8@redhat.com> On 11/6/19 12:18 PM, Aleksey Shipilev wrote: > Bug: > https://bugs.openjdk.java.net/browse/JDK-8233695 > > Fix: > https://cr.openjdk.java.net/~shade/8233695/webrev.01/ > > This is the simplest fix I can think of in orderAccess: cast away constness. Also, trivially unused > parameter in templateInterpreterGenerator_aarch64.cpp. The issue is exposed in 14, but it is > actually there in all releases down to 8-aarch64. It's better to use const_cast here. Otherwise OK, thanks. -- Andrew Haley (he/him) Java Platform Lead Engineer Red Hat UK Ltd. https://keybase.io/andrewhaley EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From shade at redhat.com Wed Nov 6 12:46:55 2019 From: shade at redhat.com (Aleksey Shipilev) Date: Wed, 6 Nov 2019 13:46:55 +0100 Subject: [aarch64-port-dev ] RFR (XS) 8233695: AArch64 build failures after -Wno-extra removal In-Reply-To: <46aa9d40-7353-c5b6-6bd9-4152eee0a3b8@redhat.com> References: <46aa9d40-7353-c5b6-6bd9-4152eee0a3b8@redhat.com> Message-ID: <9f35ecb2-83e6-bf04-5f91-a671fd65070f@redhat.com> On 11/6/19 1:33 PM, Andrew Haley wrote: > On 11/6/19 12:18 PM, Aleksey Shipilev wrote: >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8233695 >> >> Fix: >> https://cr.openjdk.java.net/~shade/8233695/webrev.01/ >> >> This is the simplest fix I can think of in orderAccess: cast away constness. Also, trivially unused >> parameter in templateInterpreterGenerator_aarch64.cpp. The issue is exposed in 14, but it is >> actually there in all releases down to 8-aarch64. > > It's better to use const_cast here. Otherwise OK, thanks. Right. Like this? 
https://cr.openjdk.java.net/~shade/8233695/webrev.02/ -- Thanks, -Aleksey From aph at redhat.com Wed Nov 6 13:14:15 2019 From: aph at redhat.com (Andrew Haley) Date: Wed, 6 Nov 2019 13:14:15 +0000 Subject: [aarch64-port-dev ] RFR (XS) 8233695: AArch64 build failures after -Wno-extra removal In-Reply-To: <9f35ecb2-83e6-bf04-5f91-a671fd65070f@redhat.com> References: <46aa9d40-7353-c5b6-6bd9-4152eee0a3b8@redhat.com> <9f35ecb2-83e6-bf04-5f91-a671fd65070f@redhat.com> Message-ID: <360b522d-ff3d-16af-b833-27b518e39fc3@redhat.com> On 11/6/19 12:46 PM, Aleksey Shipilev wrote: > Right. Like this? > https://cr.openjdk.java.net/~shade/8233695/webrev.02/ Exactly. -- Andrew Haley (he/him) Java Platform Lead Engineer Red Hat UK Ltd. https://keybase.io/andrewhaley EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From matthias.baesken at sap.com Wed Nov 6 13:19:18 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Wed, 6 Nov 2019 13:19:18 +0000 Subject: 8233328: fix minimal VM build on Linux s390x In-Reply-To: References: <8CED8B50-F788-43D8-BA49-24DFA368EF18@sap.com> Message-ID: Hi Martin, I changed the #ifdef around the intrinsics and moved the inclusion in sharedRuntime_s390.cpp . May I add you as a reviewer ? Best regards , Matthias From: Doerr, Martin Sent: Mittwoch, 6. November 2019 12:44 To: Schmidt, Lutz ; Baesken, Matthias ; 'hotspot-dev at openjdk.java.net' Subject: RE: 8233328: fix minimal VM build on Linux s390x Hi Matthias, > One little thing: you could make array_equals() depend on #ifdef COMPILER2 as well. > That would save some more bytes. Please either protect all or no intrinsic implementations with #ifdef COMPILER2 (string_compress to string_indexof_char). sharedRuntime_s390.cpp: #include should be sorted alphabetically. Otherwise looks good to me. Thanks for fixing it. Best regards, Martin From: Schmidt, Lutz > Sent: Dienstag, 5. 
November 2019 16:10 To: Baesken, Matthias >; 'hotspot-dev at openjdk.java.net' > Cc: Doerr, Martin > Subject: Re: 8233328: fix minimal VM build on Linux s390x Hi Matthias, your change looks even better now. Please note: I'm not a Reviewer. One little thing: you could make array_equals() depend on #ifdef COMPILER2 as well. That would save some more bytes. Thanks, Lutz From: "Baesken, Matthias" > Date: Tuesday, 5. November 2019 at 09:54 To: "hotspot-dev at openjdk.java.net" > Cc: "Doerr, Martin (martin.doerr at sap.com)" >, Lutz Schmidt > Subject: RE: RFR: 8233328: fix minimal VM build on Linux s390x Hello, here is another webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.2/ It adjusts the coding in macroAssembler_s390.cpp mentioned below . Best regards, Matthias From: Baesken, Matthias Sent: Montag, 4. November 2019 12:27 To: 'hotspot-dev at openjdk.java.net' > Subject: RFR: 8233328: fix minimal VM build on Linux s390x Hello, please review the following change that fixes the "minimal VM" build on linux s390x . While looking into the issue I noticed that MacroAssembler::generate_type_profiling ( in http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.1/src/hotspot/cpu/s390/macroAssembler_s390.cpp.frames.html ) is unused, so I removed it (probably it is a leftover from old jdk8 (?) times) . In 2929 void MacroAssembler::nmethod_UEP(Label& ic_miss) { 2930 #ifdef COMPILER2 2931 Register ic_reg = as_Register(Matcher::inline_cache_reg_encode()); 2932 #else 2933 Register ic_reg = as_Register(0); 2934 #endif We probably still need to replace as_Register(0); with something better, any suggestions ? 
Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8233328 http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.1/ Thanks, Matthias From martin.doerr at sap.com Wed Nov 6 13:22:38 2019 From: martin.doerr at sap.com (Doerr, Martin) Date: Wed, 6 Nov 2019 13:22:38 +0000 Subject: 8233328: fix minimal VM build on Linux s390x In-Reply-To: References: <8CED8B50-F788-43D8-BA49-24DFA368EF18@sap.com> Message-ID: Hi Matthias, > I changed the #ifdef around the intrinsics and moved the inclusion in sharedRuntime_s390.cpp. Thanks. I don't need to see another webrev for that. > May I add you as a reviewer ? Sure. Best regards, Martin From: Baesken, Matthias Sent: Mittwoch, 6. November 2019 14:19 To: Doerr, Martin ; Schmidt, Lutz ; 'hotspot-dev at openjdk.java.net' Subject: RE: 8233328: fix minimal VM build on Linux s390x Hi Martin, I changed the #ifdef around the intrinsics and moved the inclusion in sharedRuntime_s390.cpp . May I add you as a reviewer ? Best regards , Matthias From: Doerr, Martin > Sent: Mittwoch, 6. November 2019 12:44 To: Schmidt, Lutz >; Baesken, Matthias >; 'hotspot-dev at openjdk.java.net' > Subject: RE: 8233328: fix minimal VM build on Linux s390x Hi Matthias, > One little thing: you could make array_equals() depend on #ifdef COMPILER2 as well. > That would save some more bytes. Please either protect all or no intrinsic implementations with #ifdef COMPILER2 (string_compress to string_indexof_char). sharedRuntime_s390.cpp: #include should be sorted alphabetically. Otherwise looks good to me. Thanks for fixing it. Best regards, Martin From: Schmidt, Lutz > Sent: Dienstag, 5. November 2019 16:10 To: Baesken, Matthias >; 'hotspot-dev at openjdk.java.net' > Cc: Doerr, Martin > Subject: Re: 8233328: fix minimal VM build on Linux s390x Hi Matthias, your change looks even better now. Please note: I'm not a Reviewer. One little thing: you could make array_equals() depend on #ifdef COMPILER2 as well. That would save some more bytes. 
Thanks, Lutz From: "Baesken, Matthias" > Date: Tuesday, 5. November 2019 at 09:54 To: "hotspot-dev at openjdk.java.net" > Cc: "Doerr, Martin (martin.doerr at sap.com)" >, Lutz Schmidt > Subject: RE: RFR: 8233328: fix minimal VM build on Linux s390x Hello, here is another webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.2/ It adjusts the coding in macroAssembler_s390.cpp mentioned below . Best regards, Matthias From: Baesken, Matthias Sent: Montag, 4. November 2019 12:27 To: 'hotspot-dev at openjdk.java.net' > Subject: RFR: 8233328: fix minimal VM build on Linux s390x Hello, please review the following change that fixes the "minimal VM" build on linux s390x . While looking into the issue I noticed that MacroAssembler::generate_type_profiling ( in http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.1/src/hotspot/cpu/s390/macroAssembler_s390.cpp.frames.html ) is unused, so I removed it (probably it is a leftover from old jdk8 (?) times) . In 2929 void MacroAssembler::nmethod_UEP(Label& ic_miss) { 2930 #ifdef COMPILER2 2931 Register ic_reg = as_Register(Matcher::inline_cache_reg_encode()); 2932 #else 2933 Register ic_reg = as_Register(0); 2934 #endif We probably still need to replace as_Register(0); with something better, any suggestions ? Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8233328 http://cr.openjdk.java.net/~mbaesken/webrevs/8233328.1/ Thanks, Matthias From ramkumar.sunderbabu at oracle.com Thu Nov 7 09:34:31 2019 From: ramkumar.sunderbabu at oracle.com (Ramkumar Sunderbabu) Date: Thu, 7 Nov 2019 01:34:31 -0800 (PST) Subject: RFR(S) : 8228448 : Jconsole can't connect to itself Message-ID: <25a1a15d-82ed-4cc8-a04c-8d53f97e7e10@default> Hi all, Please review this small patch. Added "-Djdk.attach.allowAttachSelf=true" to jconsole launcher make file. This will allow jconsole to connect to itself. 
JBS: https://bugs.openjdk.java.net/browse/JDK-8228448 Webrev: http://cr.openjdk.java.net/~vaibhav/8228448/webrev.00/ Testing: tested with jconsole Thanks, Ram From david.holmes at oracle.com Thu Nov 7 09:52:26 2019 From: david.holmes at oracle.com (David Holmes) Date: Thu, 7 Nov 2019 19:52:26 +1000 Subject: RFR(S) : 8228448 : Jconsole can't connect to itself In-Reply-To: <25a1a15d-82ed-4cc8-a04c-8d53f97e7e10@default> References: <25a1a15d-82ed-4cc8-a04c-8d53f97e7e10@default> Message-ID: Hi Ram, This is a build-dev issue not a hotspot-dev issue. Cheers, David On 7/11/2019 7:34 pm, Ramkumar Sunderbabu wrote: > Hi all, > > Please review this small patch. > > Added "-Djdk.attach.allowAttachSelf=true" to jconsole launcher make file. This will allow jconsole to connect to itself. > > > > JBS: https://bugs.openjdk.java.net/browse/JDK-8228448 > > Webrev: http://cr.openjdk.java.net/~vaibhav/8228448/webrev.00/ > > Testing: tested with jconsole > > > > Thanks, > > Ram > > > From per.liden at oracle.com Thu Nov 7 13:31:07 2019 From: per.liden at oracle.com (Per Liden) Date: Thu, 7 Nov 2019 14:31:07 +0100 Subject: RFR: 8233793: ZGC: Incorrect type used in ZBarrierSetC2 clone_type() Message-ID: Please review this semi-urgent fix (causes lots of failures in tier3). JDK-8233783 incorrectly changed the type from TypeInt::INT to TypeLong::LONG. We should change that back to TypeInt::INT. Bug: https://bugs.openjdk.java.net/browse/JDK-8233793 Webrev: http://cr.openjdk.java.net/~pliden/8233793/webrev.0 /Per From erik.osterlund at oracle.com Thu Nov 7 13:44:37 2019 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Thu, 7 Nov 2019 14:44:37 +0100 Subject: RFR: 8233793: ZGC: Incorrect type used in ZBarrierSetC2 clone_type() In-Reply-To: References: Message-ID: <77577809-d081-8538-7352-2482a25e7790@oracle.com> Hi Per, Looks good, and IMO trivial. Ship it. 
Thanks, /Erik On 2019-11-07 14:31, Per Liden wrote: > Please review this semi-urgent fix (causes lots of failures in tier3). > > JDK-8233783 incorrectly changed the type from TypeInt::INT to > TypeLong::LONG. We should change that back to TypeInt::INT. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8233793 > Webrev: http://cr.openjdk.java.net/~pliden/8233793/webrev.0 > > /Per From per.liden at oracle.com Thu Nov 7 13:49:49 2019 From: per.liden at oracle.com (Per Liden) Date: Thu, 7 Nov 2019 14:49:49 +0100 Subject: RFR: 8233793: ZGC: Incorrect type used in ZBarrierSetC2 clone_type() In-Reply-To: <77577809-d081-8538-7352-2482a25e7790@oracle.com> References: <77577809-d081-8538-7352-2482a25e7790@oracle.com> Message-ID: <7aa63995-179c-852f-9b2d-47d481006975@oracle.com> Thanks Erik! Since this is causing lots of failures in our CI, I'll push this immediately. cheers, Per On 11/7/19 2:44 PM, Erik Österlund wrote: > Hi Per, > > Looks good, and IMO trivial. Ship it. > > Thanks, > /Erik > > On 2019-11-07 14:31, Per Liden wrote: >> Please review this semi-urgent fix (causes lots of failures in tier3). >> >> JDK-8233783 incorrectly changed the type from TypeInt::INT to >> TypeLong::LONG. We should change that back to TypeInt::INT. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8233793 >> Webrev: http://cr.openjdk.java.net/~pliden/8233793/webrev.0 >> >> /Per > From lutz.schmidt at sap.com Thu Nov 7 15:59:07 2019 From: lutz.schmidt at sap.com (Schmidt, Lutz) Date: Thu, 7 Nov 2019 15:59:07 +0000 Subject: RFR(M): 8233787: Break cycle in vm_version* includes Message-ID: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> Dear all, may I please request reviews for this cleanup? It's a lot of files with just some #include statement changes. That makes the review process tedious and not very challenging intellectually. Anyway, your effort is very much appreciated! jdk/submit results pending. 
Bug: https://bugs.openjdk.java.net/browse/JDK-8233787 Webrev: http://cr.openjdk.java.net/~lucy/webrevs/8233787.00/ Thank you! Lutz From vladimir.kozlov at oracle.com Thu Nov 7 17:34:58 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 7 Nov 2019 09:34:58 -0800 Subject: RFR: JDK-8232684: Make switch expressions final In-Reply-To: References: Message-ID: <0c60f044-4d5b-8fc4-24ca-4da872be6333@oracle.com> HotSpot test changes (using Thread.yield()) look good. Thanks, Vladimir On 11/5/19 1:50 AM, Jan Lahoda wrote: > I've missed updates to some hotspot and jdk tests in the first patch. The problem is the unqualified > invocations of Thread.yield(), which are no longer allowed (from the spec: JLS 3.9 Keywords: "All > invocations of a method called yield must be qualified so as to be distinguished from a yield > statement."). Therefore, these invocations need to be changed to qualified invocations (i.e. > Thread.yield()). Updates to the hotspot and jdk tests are here: > http://cr.openjdk.java.net/~jlahoda/8232684/webrev.delta.00.01/ > > Full updated webrev, with changes to both javac and the tests, is here: > http://cr.openjdk.java.net/~jlahoda/8232684/webrev.01/ > > How does that look? > > Thanks, > Jan > > On 21. 10. 19 16:17, Maurizio Cimadamore wrote: >> Looks generally good - went through the test updates one by one and they look ok, except this: >> >> http://cr.openjdk.java.net/~jlahoda/8232684/webrev.00/test/langtools/tools/jdeps/listdeps/ListModuleDeps.java.udiff.html >> >> >> Which you explained to me offline (we need to change this code every time the compiler stops using >> the @Preview annotation - ugh). Nothing specific to this webrev, so I approve. 
>> >> Maurizio >> >> On 21/10/2019 14:49, Jan Lahoda wrote: >>> Hi, >>> >>> As part of preparation for proposing JEP 361: Switch Expressions (Standard) to target, I would >>> like to ask for a review of the patch to make switch expression a non-preview feature in javac: >>> http://cr.openjdk.java.net/~jlahoda/8232684/webrev.00/ >>> >>> The patch basically removes the feature from the list of preview features, updates test to this >>> new state (removes --enable-preview from associated tests, and adjusts their expected output), >>> and removes the @PreviewFeature annotation and associated text from the javadoc of the Trees API. >>> >>> I also would like to ask for a review for the CSR associated with that: >>> https://bugs.openjdk.java.net/browse/JDK-8232685 >>> >>> Reviews/comments on either of these would be very welcome! >>> >>> JBS: https://bugs.openjdk.java.net/browse/JDK-8232684 >>> >>> Thanks! >>> >>> Jan From kim.barrett at oracle.com Thu Nov 7 19:49:57 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 7 Nov 2019 14:49:57 -0500 Subject: RFR: 8233364: Fix undefined behavior in Canonicalizer::do_ShiftOp In-Reply-To: References: Message-ID: Ping. Looking for a second review. > On Nov 1, 2019, at 8:07 PM, Kim Barrett wrote: > > Please review this change to Canonicalizer::do_ShiftOp to eliminate > several actual or potential invocations of undefined behavior > involving shift operations. (See CR for details.) > > To support this fix, a set of java_shift_xxx functions are added to > globalDefinitions.hpp. These use the same implementation techniques > used by java_add and friends to perform the corresponding operation > with Java semantics for handling overflows and such. > > With these new java_shift_xxx functions available, the constant > folding by do_ShiftOp is now trivially implemented by calls to those > functions. > > Added gtest-based unit tests covering the new shift functions. 
Also > added unit tests for java_add and friends, which should have been part > of their addition by JDK-8145096. (Oops!) > > CR: > https://bugs.openjdk.java.net/browse/JDK-8233364 > > Webrev: > https://cr.openjdk.java.net/~kbarrett/8233364/open.00/ > > Testing: > mach5 tier1-3. From vladimir.kozlov at oracle.com Thu Nov 7 19:59:14 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 7 Nov 2019 11:59:14 -0800 Subject: RFR: 8233364: Fix undefined behavior in Canonicalizer::do_ShiftOp In-Reply-To: References: Message-ID: Looks good. thanks, Vladimir On 11/7/19 11:49 AM, Kim Barrett wrote: > Ping. Looking for a second review. > >> On Nov 1, 2019, at 8:07 PM, Kim Barrett wrote: >> >> Please review this change to Canonicalizer::do_ShiftOp to eliminate >> several actual or potential invocations of undefined behavior >> involving shift operations. (See CR for details.) >> >> To support this fix, a set of java_shift_xxx functions are added to >> globalDefinitions.hpp. These use the same implementation techniques >> used by java_add and friends to perform the corresponding operation >> with Java semantics for handling overflows and such. >> >> With these new java_shift_xxx functions available, the constant >> folding by do_ShiftOp is now trivially implemented by calls to those >> functions. >> >> Added gtest-based unit tests covering the new shift functions. Also >> added unit tests for java_add and friends, which should have been part >> of their addition by JDK-8145096. (Oops!) >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8233364 >> >> Webrev: >> https://cr.openjdk.java.net/~kbarrett/8233364/open.00/ >> >> Testing: >> mach5 tier1-3. 
> > From kim.barrett at oracle.com Thu Nov 7 21:03:29 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 7 Nov 2019 16:03:29 -0500 Subject: RFR: 8233364: Fix undefined behavior in Canonicalizer::do_ShiftOp In-Reply-To: References: Message-ID: <2FA95A72-B4D9-4915-B9E3-27713937D2BA@oracle.com> > On Nov 7, 2019, at 2:59 PM, Vladimir Kozlov wrote: > > Looks good. > > thanks, > Vladimir Thanks. > > On 11/7/19 11:49 AM, Kim Barrett wrote: >> Ping. Looking for a second review. >>> On Nov 1, 2019, at 8:07 PM, Kim Barrett wrote: >>> >>> Please review this change to Canonicalizer::do_ShiftOp to eliminate >>> several actual or potential invocations of undefined behavior >>> involving shift operations. (See CR for details.) >>> >>> To support this fix, a set of java_shift_xxx functions are added to >>> globalDefinitions.hpp. These use the same implementation techniques >>> used by java_add and friends to perform the corresponding operation >>> with Java semantics for handling overflows and such. >>> >>> With these new java_shift_xxx functions available, the constant >>> folding by do_ShiftOp is now trivially implemented by calls to those >>> functions. >>> >>> Added gtest-based unit tests covering the new shift functions. Also >>> added unit tests for java_add and friends, which should have been part >>> of their addition by JDK-8145096. (Oops!) >>> >>> CR: >>> https://bugs.openjdk.java.net/browse/JDK-8233364 >>> >>> Webrev: >>> https://cr.openjdk.java.net/~kbarrett/8233364/open.00/ >>> >>> Testing: >>> mach5 tier1-3. 
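[Editor's note: as context for the java_shift_xxx discussion above, the general technique can be sketched in a few lines. This is an illustrative reconstruction only, not the actual JDK code; the names merely echo the java_shift_xxx functions mentioned in the review. The idea is to mask the shift count to Java's defined range (0x1f for int) and perform the shift on the unsigned representation, so negative operands and overflow wrap as Java defines instead of invoking C++ undefined behavior.]

```cpp
#include <cstdint>
#include <cassert>

// Hypothetical sketch of Java-semantics shifts on 32-bit ints (not JDK code).
// Java's << masks the count with 0x1f and wraps on overflow; in C++, shifting
// a negative int or shifting by >= the width is undefined, so we shift in the
// unsigned domain and convert back.
static int32_t java_shift_left(int32_t value, int32_t shift) {
  uint32_t uval = static_cast<uint32_t>(value);
  return static_cast<int32_t>(uval << (static_cast<uint32_t>(shift) & 31));
}

// Java's >> is an arithmetic shift. Right-shifting a negative signed value is
// implementation-defined before C++20, but arithmetic on all HotSpot targets.
static int32_t java_shift_right(int32_t value, int32_t shift) {
  return value >> (static_cast<uint32_t>(shift) & 31);
}

// Java's >>> is a logical (zero-filling) shift, done via unsigned.
static int32_t java_shift_right_unsigned(int32_t value, int32_t shift) {
  uint32_t uval = static_cast<uint32_t>(value);
  return static_cast<int32_t>(uval >> (static_cast<uint32_t>(shift) & 31));
}
```

With functions like these, the constant folding in Canonicalizer::do_ShiftOp reduces to a plain call, which is presumably why the patch describes it as trivial.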
From sgehwolf at redhat.com Fri Nov 8 14:21:05 2019 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Fri, 08 Nov 2019 15:21:05 +0100 Subject: RFR: 8230305: Cgroups v2: Container awareness In-Reply-To: References: <072f66ee8c44034831b4e38f6470da4bff6edd07.camel@redhat.com> <7540a208e306ab957032b18178a53c6afa105d33.camel@redhat.com> Message-ID: Hi Bob, On Wed, 2019-11-06 at 10:47 +0100, Severin Gehwolf wrote: > On Tue, 2019-11-05 at 16:54 -0500, Bob Vandette wrote: > > Severin, > > > > Thanks for taking on this cgroup v2 improvement. > > > > In general I like the implementation and the refactoring. The CachedMetric class is nice. > > We can add any metric we want to cache in a more general way. > > > > Is this the latest version of the webrev? > > > > http://cr.openjdk.java.net/~sgehwolf/webrevs/cgroupsv2-hotspot/03/webrev/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp.html > > > > It looks like you need to add the caching support for active_processor_count (JDK-8227006). > [...] > I'll do a proper rebase ASAP. Latest webrev: http://cr.openjdk.java.net/~sgehwolf/webrevs/cgroupsv2-hotspot/05/webrev/ > > I'm not sure it's worth providing different strings for Unlimited versus Max or Scaled shares. > > I'd just try to be compatible with the cgroupv2 output so you don't have to change the test. OK. Will do. Unfortunately, there is no way of NOT changing TestCPUAwareness.java as it expects CPU Shares to be written to the cgroup filesystem verbatim. That's no longer the case for cgroups v2 (at least for crun). Either way, most test changes are gone now. > > I wonder if it's worth trying to synthesize memory_max_usage_in_bytes() by keeping the highest > > value ever returned by the API. > > Interesting idea. I'll ponder this a bit and get back to you. This has been implemented. I'm not sure this is correct, though. It merely piggy-backs on calls to memory_usage_in_bytes() and keeps the high watermark value of that. 
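[Editor's note: the "piggy-back" idea described above amounts to a running high-watermark cache over the values returned by memory_usage_in_bytes(). A minimal sketch with hypothetical names follows; the webrev's actual code may differ, since cgroups v2 exposes only the current reading (memory.current) and no max-usage file.]

```cpp
#include <cstdint>

// Illustrative sketch (not the webrev's code): synthesize a "max usage"
// metric by remembering the highest value ever observed through the
// current-usage query.
class MemoryUsageTracker {
  int64_t _max_seen = 0;
public:
  // Called with each fresh reading (e.g. from memory.current); returns the
  // reading and updates the running high watermark as a side effect.
  int64_t record_usage(int64_t current_usage) {
    if (current_usage > _max_seen) {
      _max_seen = current_usage;
    }
    return current_usage;
  }
  // Best-effort substitute for memory_max_usage_in_bytes(): only as accurate
  // as the sampling frequency, since peaks between calls are missed.
  int64_t max_usage_seen() const { return _max_seen; }
};
```

The caveat Severin raises is visible in the sketch: any peak that occurs between two record_usage() calls is never seen, so the synthesized maximum can underestimate the true one.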
Testing passed on F31 with cgroups v2 controllers properly configured (podman) and hybrid (legacy hierarchy) with docker/podman. Thoughts? Thanks, Severin From kim.barrett at oracle.com Sat Nov 9 01:58:03 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 8 Nov 2019 20:58:03 -0500 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> Message-ID: > On Nov 7, 2019, at 10:59 AM, Schmidt, Lutz wrote: > > Dear all, > > may I please request reviews for this cleanup? It's a lot of files with just some #include statement changes. That makes the review process tedious and not very challenging intellectually. > > Anyway, your effort is very much appreciated! > > jdk/submit results pending. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8233787 > Webrev: http://cr.openjdk.java.net/~lucy/webrevs/8233787.00/ > > Thank you! > Lutz I don't think this is the right approach. It makes all the vm_version_<cpu>.hpp files not be stand alone, which I think is not a good idea. I think the real problem is that Abstract_VM_Version is declared in vm_version.hpp. I think that file should be split into abstract_vm_version.hpp (with most of what's currently in vm_version.hpp), with vm_version.hpp being just (untested) #ifndef SHARE_RUNTIME_VM_VERSION_HPP #define SHARE_RUNTIME_VM_VERSION_HPP #include "utilities/macros.hpp" #include CPU_HEADER(vm_version) #endif // SHARE_RUNTIME_VM_VERSION_HPP Change all the vm_version_<cpu>.hpp files to #include abstract_vm_version.hpp rather than vm_version.hpp. Other than in vm_version_<cpu>.hpp files, always #include vm_version.hpp. 
From david.holmes at oracle.com Mon Nov 11 10:54:47 2019 From: david.holmes at oracle.com (David Holmes) Date: Mon, 11 Nov 2019 20:54:47 +1000 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> Message-ID: Also note we have an open RFE to try and fix the VM_Version vs Abstract_VM_version mess. But it's such a mess it keeps getting deferred. David On 9/11/2019 11:58 am, Kim Barrett wrote: >> On Nov 7, 2019, at 10:59 AM, Schmidt, Lutz wrote: >> >> Dear all, >> >> may I please request reviews for this cleanup? It's a lot of files with just some #include statement changes. That makes the review process tedious and not very challenging intellectually. >> >> Anyway, your effort is very much appreciated! >> >> jdk/submit results pending. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8233787 >> Webrev: http://cr.openjdk.java.net/~lucy/webrevs/8233787.00/ >> >> Thank you! >> Lutz > > I don't think this is the right approach. It makes all the > vm_version_<cpu>.hpp files not be stand alone, which I think is not a > good idea. > > I think the real problem is that Abstract_VM_Version is declared in > vm_version.hpp. I think that file should be split into > abstract_vm_version.hpp (with most of what's currently in > vm_version.hpp), with vm_version.hpp being just (untested) > > > #ifndef SHARE_RUNTIME_VM_VERSION_HPP > #define SHARE_RUNTIME_VM_VERSION_HPP > > #include "utilities/macros.hpp" > #include CPU_HEADER(vm_version) > > #endif // SHARE_RUNTIME_VM_VERSION_HPP > > > Change all the vm_version_<cpu>.hpp files to #include > abstract_vm_version.hpp rather than vm_version.hpp. > > Other than in vm_version_<cpu>.hpp files, always #include > vm_version.hpp. 
From felix.yang at huawei.com Mon Nov 11 12:01:24 2019 From: felix.yang at huawei.com (Yangfei (Felix)) Date: Mon, 11 Nov 2019 12:01:24 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> Message-ID: > -----Original Message----- > From: Andrew Haley [mailto:aph at redhat.com] > Sent: Monday, November 11, 2019 7:17 PM > To: Yangfei (Felix) ; > aarch64-port-dev at openjdk.java.net > Subject: Re: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic > operations > > On 11/5/19 6:20 AM, Yangfei (Felix) wrote: > > Please review these small improvements of aarch64 atomic operations. > > This eliminates the use of full memory barriers. > > Passed tier1-3 testing. > > No, rejected. > > Patch also must go to hotspot-dev. CCing to hotspot-dev. > Are you sure this is safe? The HotSpot internal barriers are specified as being > full two-way barriers, which these are not. Tier1 testing really isn't going to do > it. Now, you might argue that none of the uses in HotSpot actually require > anything stronger than acq/rel, but good luck proving that. I was also curious about the reason why full memory barrier is used here. For add_and_fetch, I was thinking that there is no difference in functionality for the following two code snippets. It's interesting to know that this may make a difference. Can you elaborate more on that please? 
1) without patch .L2: ldxr x2, [x1] add x2, x2, x0 stlxr w3, x2, [x1] cbnz w3, .L2 dmb ish mov x0, x2 ret ----------------------------------------------- 2) with patch .L2: ldaxr x2, [x1] add x2, x2, x0 stlxr w3, x2, [x1] cbnz w3, .L2 mov x0, x2 ret From felix.yang at huawei.com Mon Nov 11 12:44:03 2019 From: felix.yang at huawei.com (Yangfei (Felix)) Date: Mon, 11 Nov 2019 12:44:03 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> Message-ID: > -----Original Message----- > From: Yangfei (Felix) > Sent: Monday, November 11, 2019 8:01 PM > To: 'Andrew Haley' ; aarch64-port-dev at openjdk.java.net > Cc: 'hotspot-dev at openjdk.java.net' > Subject: RE: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic > operations > > > -----Original Message----- > > From: Andrew Haley [mailto:aph at redhat.com] > > Sent: Monday, November 11, 2019 7:17 PM > > To: Yangfei (Felix) ; > > aarch64-port-dev at openjdk.java.net > > Subject: Re: [aarch64-port-dev ] RFR: aarch64: minor improvements of > > atomic operations > > > > On 11/5/19 6:20 AM, Yangfei (Felix) wrote: > > > Please review these small improvements of aarch64 atomic operations. > > > This eliminates the use of full memory barriers. > > > Passed tier1-3 testing. > > > > No, rejected. > > > > Patch also must go to hotspot-dev. > > CCing to hotspot-dev. > > > Are you sure this is safe? The HotSpot internal barriers are specified > > as being full two-way barriers, which these are not. Tier1 testing > > really isn't going to do it. Now, you might argue that none of the > > uses in HotSpot actually require anything stronger than acq/rel, but good luck > proving that. > > I was also curious about the reason why full memory barrier is used here. > For add_and_fetch, I was thinking that there is no difference in functionality for > the following two code snippets. > It's interesting to know that this may make a difference. 
Can you elaborate > more on that please? > > 1) without patch > .L2: > ldxr x2, [x1] > add x2, x2, x0 > stlxr w3, x2, [x1] > cbnz w3, .L2 > dmb ish > mov x0, x2 > ret > ----------------------------------------------- > 2) with patch > .L2: > ldaxr x2, [x1] > add x2, x2, x0 > stlxr w3, x2, [x1] > cbnz w3, .L2 > mov x0, x2 > ret And looks like the aarch64 port from Oracle also did the same thing: http://hg.openjdk.java.net/jdk-updates/jdk11u-dev/file/f8b2e95a1d41/src/hotspot/os_cpu/linux_arm/atomic_linux_arm.hpp template struct Atomic::PlatformAdd : Atomic::AddAndFetch > { template D add_and_fetch(I add_value, D volatile* dest, atomic_memory_order order) const; }; template<> template inline D Atomic::PlatformAdd<4>::add_and_fetch(I add_value, D volatile* dest, atomic_memory_order order) const { STATIC_ASSERT(4 == sizeof(I)); STATIC_ASSERT(4 == sizeof(D)); #ifdef AARCH64 D val; int tmp; __asm__ volatile( "1:\n\t" " ldaxr %w[val], [%[dest]]\n\t" " add %w[val], %w[val], %w[add_val]\n\t" " stlxr %w[tmp], %w[val], [%[dest]]\n\t" " cbnz %w[tmp], 1b\n\t" : [val] "=&r" (val), [tmp] "=&r" (tmp) : [add_val] "r" (add_value), [dest] "r" (dest) : "memory"); return val; #else return add_using_helper(os::atomic_add_func, add_value, dest); #endif } From aph at redhat.com Mon Nov 11 15:05:10 2019 From: aph at redhat.com (Andrew Haley) Date: Mon, 11 Nov 2019 15:05:10 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> Message-ID: <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> On 11/11/19 12:01 PM, Yangfei (Felix) wrote: > I was also curious about the reason why full memory barrier is used > here. For add_and_fetch, I was thinking that there is no difference > in functionality for the following two code snippet. It's > interesting to know that this may make a difference. Can you > elaborate more on that please? 
For add_and_fetch the default atomic_memory_order is memory_order_conservative. I'm not sure exactly what that means, but it is stronger than SEQ_CST; it's been described as a "full barrier". __ATOMIC_ACQ_REL for this operation translates approximately to

   load
     LoadLoad|LoadStore
   add
     StoreStore|LoadStore
   store

In other words, there is nothing to prevent subsequent stores being reordered with this store. Therefore your change does not meet the specification for memory_order_conservative. You could, if you wanted, only make this change for weaker memory orderings, but AFAIK they are not used.

You could argue that AArch64 won't do such a reordering, but I'd reply that even if AArch64 can't do such a reordering, GCC sure can.

And finally, is there any operation in HotSpot that actually requires such strong memory semantics? Probably not, but no-one has ever been brave enough to say so.

-- Andrew Haley (he/him) Java Platform Lead Engineer Red Hat UK Ltd.
https://keybase.io/andrewhaley EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From aph at redhat.com Mon Nov 11 16:36:38 2019 From: aph at redhat.com (Andrew Haley) Date: Mon, 11 Nov 2019 16:36:38 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> Message-ID: <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> On 11/11/19 3:05 PM, Andrew Haley wrote: > And finally, is there any operation in HotSpot that actually requires > such strong memory semantics? Probably not, but no-one has ever been > brave enough to say so. Here's a place where it really does matter. void ShenandoahPacer::restart_with(size_t non_taxable_bytes, double tax_rate) { size_t initial = (size_t)(non_taxable_bytes * tax_rate) >> LogHeapWordSize; STATIC_ASSERT(sizeof(size_t) <= sizeof(intptr_t)); Atomic::xchg((intptr_t)initial, &_budget); Atomic::store(tax_rate, &_tax_rate); Atomic::inc(&_epoch); Note: the xchg is conservative, the store is plain. The xchg value should be visible before the store. -- Andrew Haley (he/him) Java Platform Lead Engineer Red Hat UK Ltd. https://keybase.io/andrewhaley EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From lutz.schmidt at sap.com Mon Nov 11 16:56:50 2019 From: lutz.schmidt at sap.com (Schmidt, Lutz) Date: Mon, 11 Nov 2019 16:56:50 +0000 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> Message-ID: Oh, oh, looks like I stepped into a beehive... Found JDK-8202579 and JDK-8145956 talking about the unwanted use of Abstract_VM_Version. My intended change would not tackle that "mess", as you call it. But it would make potential future cleanups a little bit easier by ensuring all of hotspot code only includes vm_version.hpp. 
I'm in the process of modifying my initial change to reflect Kim's suggestions. I'll send it out Tuesday (hopefully), Wednesday the latest. Regards, Lutz On 11.11.19, 11:54, "David Holmes" wrote: Also note we have an open RFE to try and fix the VM_Version vs Abstract_VM_version mess. But it's such a mess it keeps getting deferred. David On 9/11/2019 11:58 am, Kim Barrett wrote: >> On Nov 7, 2019, at 10:59 AM, Schmidt, Lutz wrote: >> >> Dear all, >> >> may I please request reviews for this cleanup? It's a lot of files with just some #include statement changes. That makes the review process tedious and not very challenging intellectually. >> >> Anyway, your effort is very much appreciated! >> >> jdk/submit results pending. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8233787 >> Webrev: http://cr.openjdk.java.net/~lucy/webrevs/8233787.00/ >> >> Thank you! >> Lutz > > I don't think this is the right approach. It makes all the > vm_version_.hpp files not be stand alone, which I think is not a > good idea. > > I think the real problem is that Abstract_VM_Version is declared in > vm_version.hpp. I think that file should be split into > abstract_vm_version.hpp (with most of what's currently in > vm_version.hpp), with vm_version.hpp being just (untested) > > > #ifndef SHARE_RUNTIME_VM_VERSION_HPP > #define SHARE_RUNTIME_VM_VERSION_HPP > > #include "utilities/macros.hpp" > #include CPU_HEADER(vm_version) > > #endif // SHARE_RUNTIME_VM_VERSION_HPP > > > Change all the vm_version_.hpp files #include > abstract_vm_version.hpp rather than vm_version.hpp. > > Other than in vm_version_.hpp files, always #include > vm_version.hpp. 
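Kim's suggested thin wrapper, reflowed here as a sketch for readability (untested, per his own caveat; the exact CPU_HEADER expansion shown in the comment is an assumption based on HotSpot's existing per-CPU naming convention):

```cpp
// Sketch of the proposed split, not a tested patch: vm_version.hpp becomes a
// thin wrapper that pulls in the CPU-specific header, while Abstract_VM_Version
// moves to a new abstract_vm_version.hpp that the per-CPU headers include.
#ifndef SHARE_RUNTIME_VM_VERSION_HPP
#define SHARE_RUNTIME_VM_VERSION_HPP

#include "utilities/macros.hpp"
#include CPU_HEADER(vm_version)  // expands to e.g. "vm_version_aarch64.hpp"

#endif // SHARE_RUNTIME_VM_VERSION_HPP
```

With each per-CPU header including abstract_vm_version.hpp instead of vm_version.hpp, the include cycle is broken while shared code keeps including only vm_version.hpp.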
> From erik.osterlund at oracle.com Mon Nov 11 17:11:28 2019 From: erik.osterlund at oracle.com (=?utf-8?Q?Erik_=C3=96sterlund?=) Date: Mon, 11 Nov 2019 18:11:28 +0100 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: References: Message-ID: Hi Felix, Would you mind pasting a link to the proposed change? I can not determine its validity otherwise. Thanks, /Erik > On 11 Nov 2019, at 13:01, Yangfei (Felix) wrote: > > >> >> -----Original Message----- >> From: Andrew Haley [mailto:aph at redhat.com] >> Sent: Monday, November 11, 2019 7:17 PM >> To: Yangfei (Felix) ; >> aarch64-port-dev at openjdk.java.net >> Subject: Re: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic >> operations >> >>> On 11/5/19 6:20 AM, Yangfei (Felix) wrote: >>> Please review this small improvements of aarch64 atomic operations. >>> This eliminates the use of full memory barriers. >>> Passed tier1-3 testing. >> >> No, rejected. >> >> Patch also must go to hotspot-dev. > > CCing to hotspot-dev. > >> Are you sure this is safe? The HotSpot internal barriers are specified as being >> full two-way barriers, which these are not. Tier1 testing really isn't going to do >> it. Now, you might argue that none of the uses in HotSpot actually require >> anything stronger that acq/rel, but good luck proving that. > > I was also curious about the reason why full memory barrier is used here. > For add_and_fetch, I was thinking that there is no difference in functionality for the following two code snippet. > It's interesting to know that this may make a difference. Can you elaborate more on that please? 
> > 1) without patch > .L2: > ldxr x2, [x1] > add x2, x2, x0 > stlxr w3, x2, [x1] > cbnz w3, .L2 > dmb ish > mov x0, x2 > ret > ----------------------------------------------- > 2) with patch > .L2: > ldaxr x2, [x1] > add x2, x2, x0 > stlxr w3, x2, [x1] > cbnz w3, .L2 > mov x0, x2 > ret From aph at redhat.com Mon Nov 11 17:53:06 2019 From: aph at redhat.com (Andrew Haley) Date: Mon, 11 Nov 2019 17:53:06 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: References: Message-ID: <4455d529-0f43-e6ba-d3d8-2639f4d79802@redhat.com> On 11/11/19 5:11 PM, Erik Österlund wrote: > Hi Felix, > > Would you mind pasting a link to the proposed change? I can not determine its validity otherwise. Patch: diff -r 2700c409ff10 src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp --- a/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp Sun Nov 03 18:02:29 2019 -0500 +++ b/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp Wed Nov 06 14:13:00 2019 +0800 @@ -40,8 +40,7 @@ { template D add_and_fetch(I add_value, D volatile* dest, atomic_memory_order order) const { - D res = __atomic_add_fetch(dest, add_value, __ATOMIC_RELEASE); - FULL_MEM_BARRIER; + D res = __atomic_add_fetch(dest, add_value, __ATOMIC_ACQ_REL); return res; } }; @@ -52,8 +51,7 @@ T volatile* dest, atomic_memory_order order) const { STATIC_ASSERT(byte_size == sizeof(T)); - T res = __sync_lock_test_and_set(dest, exchange_value); - FULL_MEM_BARRIER; + T res = __atomic_exchange_n(dest, exchange_value, __ATOMIC_ACQ_REL); return res; } -- Andrew Haley (he/him) Java Platform Lead Engineer Red Hat UK Ltd. 
https://keybase.io/andrewhaley EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From coleen.phillimore at oracle.com Mon Nov 11 19:59:55 2019 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Mon, 11 Nov 2019 14:59:55 -0500 Subject: RFR (M) 8233913: Remove implicit conversion from Method* to methodHandle Message-ID: Summary: Fix call sites to use existing THREAD local or pass down THREAD local for shallower callsites. Make linkResolver methods return Method* for caller to handleize if needed. There are a small number of changes to several files, mostly obvious. Some comments on a few of the specific changes: http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev/src/hotspot/share/aot/aotCompiledMethod.cpp.udiff.html http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev/src/hotspot/share/code/nmethod.cpp.udiff.html The comment and the methodHandle don't make sense since there's a NSV there. Method* will not be reclaimed ever, and it doesn't move. There might have been a safepoint here once. http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev/src/hotspot/share/ci/ciMethod.cpp.udiff.html If you have a methodHandle, you don't need to do mh()->max_stack(). The -> operator will expose the underlying Method*. http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev/src/hotspot/share/classfile/javaClasses.cpp.udiff.html The Method* in the vframeStream is safe here because the stack frames are followed in case of safepoint, and will mark the Method* as live, so this was unnecessary. http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev/src/hotspot/share/compiler/tieredThresholdPolicy.hpp.udiff.html I changed several of the TieredThresholdPolicy functions to take const methodHandle as a parameter to avoid unhandleizing and rehandleizing, and avoid Thread::current() calls. 
http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev/src/hotspot/share/interpreter/linkResolver.hpp.udiff.html I changed LinkResolver methods to return Method* to avoid unnecessary handlizing. The handle copy is elided by most compilers but it was still not needed by many callers. Tested with tier1 on all Oracle platforms, and tier 2-3 on linux-x64. I also performance tested this with a slight avg 0.5% improvement, and fewer instructions: eg: PerfStartup-Noop instructions on linux-x64 (before/after) 0.49% 149943356.15 ± 262156.46 149213135.00 ± 281141.80 p = 0.000 open webrev at http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev bug link https://bugs.openjdk.java.net/browse/JDK-8233913 thanks, Coleen From kim.barrett at oracle.com Mon Nov 11 20:52:39 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Mon, 11 Nov 2019 15:52:39 -0500 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> Message-ID: <8EEE22E0-E8CB-4C21-A2EE-C27338BD3D12@oracle.com> > On Nov 11, 2019, at 11:56 AM, Schmidt, Lutz wrote: > > Oh, oh, > looks like I stepped into a beehive... Found JDK-8202579 and JDK-8145956 talking about the unwanted use of Abstract_VM_Version. > > My intended change would not tackle that "mess", as you call it. But it would make potential future cleanups a little bit easier by ensuring all of hotspot code only includes vm_version.hpp. I'm in the process of modifying my initial change to reflect Kim's suggestions. I'll send it out Tuesday (hopefully), Wednesday the latest. JDK-7041262 was fixed a while ago, though I found that a new reference to Abstract_VM_Version was recently added: https://bugs.openjdk.java.net/browse/JDK-8233943 I agree that JDK-8202579 looks like it might be easier to deal with after a change to break include cycles. (BTW, I hadn't noticed the include cycles before; good find!) 
From felix.yang at huawei.com Tue Nov 12 02:57:37 2019 From: felix.yang at huawei.com (Yangfei (Felix)) Date: Tue, 12 Nov 2019 02:57:37 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> Message-ID: > On 11/11/19 3:05 PM, Andrew Haley wrote: > > And finally, is there any operation in HotSpot that actually requires > > such strong memory semantics? Probably not, but no-one has ever been > > brave enough to say so. > > Here's a place where it really does matter. > > void ShenandoahPacer::restart_with(size_t non_taxable_bytes, double > tax_rate) { > size_t initial = (size_t)(non_taxable_bytes * tax_rate) >> LogHeapWordSize; > STATIC_ASSERT(sizeof(size_t) <= sizeof(intptr_t)); > Atomic::xchg((intptr_t)initial, &_budget); > Atomic::store(tax_rate, &_tax_rate); > Atomic::inc(&_epoch); > > Note: the xchg is conservative, the store is plain. The xchg value should be > visible before the store. Thanks for explaining this. I see your point now. For memory_order_conservative order, looks like that ppc enforced an order which is stronger than aarch64. ppc issues two full memory barriers: one before the loop and one after the loop. But for aarch64, the preceding load/store can still floating after the first ldxr instruction : .L2: ldxr x2, [x1] add x2, x2, x0 stlxr w3, x2, [x1] cbnz w3, .L2 dmb ish So my question is: for "two-way memory barrier", do we need another full barrier before the loop? 
Felix From felix.yang at huawei.com Tue Nov 12 08:37:02 2019 From: felix.yang at huawei.com (Yangfei (Felix)) Date: Tue, 12 Nov 2019 08:37:02 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> Message-ID: > -----Original Message----- > From: Yangfei (Felix) > Sent: Tuesday, November 12, 2019 10:58 AM > To: 'Andrew Haley' ; aarch64-port-dev at openjdk.java.net > Cc: hotspot-dev at openjdk.java.net > Subject: RE: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic > operations > > > On 11/11/19 3:05 PM, Andrew Haley wrote: > > > And finally, is there any operation in HotSpot that actually > > > requires such strong memory semantics? Probably not, but no-one has > > > ever been brave enough to say so. > > > > Here's a place where it really does matter. > > > > void ShenandoahPacer::restart_with(size_t non_taxable_bytes, double > > tax_rate) { > > size_t initial = (size_t)(non_taxable_bytes * tax_rate) >> > LogHeapWordSize; > > STATIC_ASSERT(sizeof(size_t) <= sizeof(intptr_t)); > > Atomic::xchg((intptr_t)initial, &_budget); > > Atomic::store(tax_rate, &_tax_rate); > > Atomic::inc(&_epoch); > > > > Note: the xchg is conservative, the store is plain. The xchg value > > should be visible before the store. > > Thanks for explaining this. I see your point now. > For memory_order_conservative order, looks like that ppc enforced an order > which is stronger than aarch64. > ppc issues two full memory barriers: one before the loop and one after the > loop. > But for aarch64, the preceding load/store can still floating after the first ldxr > instruction : > > .L2: > ldxr x2, [x1] > add x2, x2, x0 > stlxr w3, x2, [x1] > cbnz w3, .L2 > dmb ish > > So my question is: for "two-way memory barrier", do we need another full > barrier before the loop? 
This has been discussed somewhere before: https://patchwork.kernel.org/patch/3575821/ Let's keep the current status for safe. Felix From thomas.schatzl at oracle.com Tue Nov 12 09:17:16 2019 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Tue, 12 Nov 2019 10:17:16 +0100 Subject: RFR (S): 8233702: Introduce helper function to clamp value to range Message-ID: <5e24eab5-7a81-3d14-31e4-7d81a10debff@oracle.com> Hi all, I would like to introduce a small helper function to clamp a given value between a min/max value. This would unclutter a few MIN(MAX(value, ), ) statements for imho better readability. There are two places in (non-CMS) code remaining with the above statement, because in these cases it happens that a value min > max is passed, i.e. you potentially (already) get returned unexpected values. (I did remove that assert in this webrev) These are in methodData.cpp: int MethodData::compute_extra_data_count(int data_size, int empty_bc_count, bool needs_speculative_traps) { 933: int extra_data_count = MIN2(empty_bc_count, MAX2(4, (empty_bc_count * 30) / 100)); hashtable.cpp: template BasicHashtableEntry* BasicHashtable::new_entry(unsigned int hashValue) { 64: int block_size = MIN2(512, MAX2((int)_table_size / 2, (int)_number_of_entries)); I would like to ask the responsible teams (compiler and runtime) to give an opinion on these cases, i.e. if these should be converted (these are intentional) or I should file an RFE to investigate further. 
CR: https://bugs.openjdk.java.net/browse/JDK-8233702 Webrev: http://cr.openjdk.java.net/~tschatzl/8233702/webrev/ Testing: hs-tier1-5 Thanks, Thomas From aph at redhat.com Tue Nov 12 09:25:18 2019 From: aph at redhat.com (Andrew Haley) Date: Tue, 12 Nov 2019 09:25:18 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> Message-ID: <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> On 11/12/19 8:37 AM, Yangfei (Felix) wrote: > This has been discussed somewhere before: https://patchwork.kernel.org/patch/3575821/ > Let's keep the current status for safe. Yes. It's been interesting to see the progress of this patch. I don't think it's the first time that someone has been tempted to change this code to make it "more efficient". I wonder if we could perhaps add a comment to that code so that it doesn't happen again. I'm not sure exactly what the patch should say beyond "do not touch". Perhaps something along the lines of "Do not touch this code unless you have at least Black Belt, 4th Dan in memory ordering." :-) More seriously, maybe simply "Note that memory_order_conservative requires a full barrier after atomic stores. See https://patchwork.kernel.org/patch/3575821/" -- Andrew Haley (he/him) Java Platform Lead Engineer Red Hat UK Ltd. 
https://keybase.io/andrewhaley EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From adinn at redhat.com Tue Nov 12 09:42:09 2019 From: adinn at redhat.com (Andrew Dinn) Date: Tue, 12 Nov 2019 09:42:09 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> Message-ID: <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> On 12/11/2019 09:25, Andrew Haley wrote: > On 11/12/19 8:37 AM, Yangfei (Felix) wrote: >> This has been discussed somewhere before: https://patchwork.kernel.org/patch/3575821/ >> Let's keep the current status for safe. > > Yes. > > It's been interesting to see the progress of this patch. I don't think > it's the first time that someone has been tempted to change this code > to make it "more efficient". > > I wonder if we could perhaps add a comment to that code so that it > doesn't happen again. I'm not sure exactly what the patch should say > beyond "do not touch". Perhaps something along the lines of "Do not > touch this code unless you have at least Black Belt, 4th Dan in memory > ordering." :-) > > More seriously, maybe simply "Note that memory_order_conservative > requires a full barrier after atomic stores. See > https://patchwork.kernel.org/patch/3575821/" Yes, that would be a help. It's particularly easy to get confused here because we happily omit the ordering of an stlr store wrt subsequent stores when the strl is implementing a Java volatile write or a Java cmpxchg. So, it might be worth adding a rider that implementing the full memory_order_conservative semantics is necessary because VM code relies on the strong ordering wrt writes that the cmpxchg is required to provide. 
regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill From Joshua.Zhu at arm.com Tue Nov 12 09:55:02 2019 From: Joshua.Zhu at arm.com (Joshua Zhu (Arm Technology China)) Date: Tue, 12 Nov 2019 09:55:02 +0000 Subject: RFR: 8233948: AArch64: Incorrect mapping between OptoReg and VMReg for high 64 bits of Vector Register In-Reply-To: References: Message-ID: Hi, Please review the following patch: JBS: https://bugs.openjdk.java.net/browse/JDK-8233948 Webrev: http://cr.openjdk.java.net/~jzhu/8233948/webrev.00/ In register definition of aarch64.ad, each vector register is defined as 4 slots with its calling convention, ideal type, ... and its VMReg value. These VMReg values in reg_def are used by ADLC to generate mapping between OptoReg and VMReg: opto2vm[]. But VMReg is treated as 2 slots inconsistently for vector register [1]. This causes incorrect mapping between VMReg and OptoReg for high 64 bits of vector register. If we write the following codes which will access high 64 bits of vector register in a way like vector_calling_convention in panama branch [2]: VMReg vmreg = v0->as_VMReg(); VMRegPair p; p.set_pair(vmreg->next(3), vmreg); And convert the VMRegPair into OptoReg [3]: Regmask rm; OptoReg::Name reg_fst = OptoReg::as_OptoReg(p.first()); OptoReg::Name reg_snd = OptoReg::as_OptoReg(p.second()); tty->print("fst=%d snd=%d\n", reg_fst, reg_snd); for (OptoReg::Name r = reg_fst; r <= reg_snd; r++) { rm->Insert(r); } In this case, for V0's VMRegPair, first VMReg's value is 64 and second one is 67. After conversion by as_OptoReg(), first OptoReg becomes 124 and second one becomes 129. Then totally 6 bits of RegMask are set incorrectly, should be 4 bits (represent 4 slots/halves). 
VMReg, opto2vm[] and vm2opto[] are dumped by [4] as below for reference: http://cr.openjdk.java.net/~jzhu/8233948/RegDump_before_change.log opto2vm[] has the following items: OptoReg: 126, VMReg: 66 OptoReg: 127, VMReg: 67 OptoReg: 128, VMReg: 66 OptoReg: 129, VMReg: 67 OptoReg pair [126, 127] and [128, 129] are both mapped to the same VMReg Pair [66, 67]. vm2opto are then generated by traverse of opto2vm [5]. VMReg: 66, OptoReg: 128 VMReg: 67, OptoReg: 129 This caused incorrect RegMask generated in above case. However for floating-point register, bottom 64 bits of NEON vector register overlaps with floating-point register. Their VMReg and corresponding mapping is still consistent, therefore this issue is not exposed. But I think we should still fix it to make the codes clean and avoid potential issue in future. After fix, the dump is: http://cr.openjdk.java.net/~jzhu/8233948/RegDump_after_change.log [1] https://hg.openjdk.java.net/jdk/jdk/file/d595f1faace2/src/hotspot/cpu/aarch64/vmreg_aarch64.inline.hpp#l35 [2] https://hg.openjdk.java.net/panama/dev/file/43bc39c09590/src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp#l1140 [3] https://hg.openjdk.java.net/panama/dev/file/43bc39c09590/src/hotspot/share/opto/matcher.cpp#l1360 [4] http://cr.openjdk.java.net/~jzhu/8233948/dump.patch [5] https://hg.openjdk.java.net/jdk/jdk/file/d595f1faace2/src/hotspot/share/opto/c2compiler.cpp#l59 Best Regards, Joshua From felix.yang at huawei.com Tue Nov 12 12:02:34 2019 From: felix.yang at huawei.com (Yangfei (Felix)) Date: Tue, 12 Nov 2019 12:02:34 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> Message-ID: > -----Original 
Message----- > From: Andrew Dinn [mailto:adinn at redhat.com] > Sent: Tuesday, November 12, 2019 5:42 PM > To: Andrew Haley ; Yangfei (Felix) > ; aarch64-port-dev at openjdk.java.net > Cc: hotspot-dev at openjdk.java.net > Subject: Re: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic > operations > > On 12/11/2019 09:25, Andrew Haley wrote: > > On 11/12/19 8:37 AM, Yangfei (Felix) wrote: > >> This has been discussed somewhere before: > >> https://patchwork.kernel.org/patch/3575821/ > >> Let's keep the current status for safe. > > > > Yes. > > > > It's been interesting to see the progress of this patch. I don't think > > it's the first time that someone has been tempted to change this code > > to make it "more efficient". > > > > I wonder if we could perhaps add a comment to that code so that it > > doesn't happen again. I'm not sure exactly what the patch should say > > beyond "do not touch". Perhaps something along the lines of "Do not > > touch this code unless you have at least Black Belt, 4th Dan in memory > > ordering." :-) > > > > More seriously, maybe simply "Note that memory_order_conservative > > requires a full barrier after atomic stores. See > > https://patchwork.kernel.org/patch/3575821/" > Yes, that would be a help. It's particularly easy to get confused here because > we happily omit the ordering of an stlr store wrt subsequent stores when the > strl is implementing a Java volatile write or a Java cmpxchg. > > So, it might be worth adding a rider that implementing the full > memory_order_conservative semantics is necessary because VM code relies > on the strong ordering wrt writes that the cmpxchg is required to provide. > I also suggest we implement these functions with inline assembly here. For Atomic::PlatformXchg, we may issue two consecutive full memory barriers with the current status. 
I used GCC 7.3.0 to compile the following function: $ cat test.c long foo(long add_value, long volatile* dest, long exchange_value) { long val = __sync_lock_test_and_set(dest, exchange_value); __sync_synchronize(); return val; } $ cat test.s .arch armv8-a .file "test.c" .text .align 2 .p2align 3,,7 .global foo .type foo, %function foo: .L2: ldxr x0, [x1] stxr w3, x2, [x1] cbnz w3, .L2 dmb ish < ======== dmb ish < ======== ret .size foo, .-foo .ident "GCC: (GNU) 7.3.0" .section .note.GNU-stack,"", at progbits From felix.yang at huawei.com Tue Nov 12 12:14:48 2019 From: felix.yang at huawei.com (Yangfei (Felix)) Date: Tue, 12 Nov 2019 12:14:48 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> Message-ID: > -----Original Message----- > From: Yangfei (Felix) > Sent: Tuesday, November 12, 2019 8:03 PM > To: 'Andrew Dinn' ; Andrew Haley ; > aarch64-port-dev at openjdk.java.net > Cc: hotspot-dev at openjdk.java.net > Subject: RE: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic > operations > > > -----Original Message----- > > From: Andrew Dinn [mailto:adinn at redhat.com] > > Sent: Tuesday, November 12, 2019 5:42 PM > > To: Andrew Haley ; Yangfei (Felix) > > ; aarch64-port-dev at openjdk.java.net > > Cc: hotspot-dev at openjdk.java.net > > Subject: Re: [aarch64-port-dev ] RFR: aarch64: minor improvements of > > atomic operations > > > > On 12/11/2019 09:25, Andrew Haley wrote: > > > On 11/12/19 8:37 AM, Yangfei (Felix) wrote: > > >> This has been discussed somewhere before: > > >> https://patchwork.kernel.org/patch/3575821/ > > >> Let's keep the current status for safe. > > > > > > Yes. > > > > > > It's been interesting to see the progress of this patch. 
I don't > > > think it's the first time that someone has been tempted to change > > > this code to make it "more efficient". > > > > > > I wonder if we could perhaps add a comment to that code so that it > > > doesn't happen again. I'm not sure exactly what the patch should say > > > beyond "do not touch". Perhaps something along the lines of "Do not > > > touch this code unless you have at least Black Belt, 4th Dan in > > > memory ordering." :-) > > > > > > More seriously, maybe simply "Note that memory_order_conservative > > > requires a full barrier after atomic stores. See > > > https://patchwork.kernel.org/patch/3575821/" > > Yes, that would be a help. It's particularly easy to get confused here > > because we happily omit the ordering of an stlr store wrt subsequent > > stores when the strl is implementing a Java volatile write or a Java cmpxchg. > > > > So, it might be worth adding a rider that implementing the full > > memory_order_conservative semantics is necessary because VM code > > relies on the strong ordering wrt writes that the cmpxchg is required to > provide. > > > > I also suggest we implement these functions with inline assembly here. > For Atomic::PlatformXchg, we may issue two consecutive full memory barriers > with the current status. > I used GCC 7.3.0 to compile the following function: > > $ cat test.c > long foo(long add_value, long volatile* dest, long exchange_value) { > long val = __sync_lock_test_and_set(dest, exchange_value); > > __sync_synchronize(); > > return val; > } > > $ cat test.s > .arch armv8-a > .file "test.c" > .text > .align 2 > .p2align 3,,7 > .global foo > .type foo, %function > foo: > .L2: > ldxr x0, [x1] > stxr w3, x2, [x1] > cbnz w3, .L2 > dmb ish < ======== > dmb ish < ======== > ret > .size foo, .-foo > .ident "GCC: (GNU) 7.3.0" > .section .note.GNU-stack,"", at progbits Also this is different from the following sequence (stxr instead of stlxr). 
// atomic_op (B) 1: ldxr x0, [B] // Exclusive load stlxr w1, x0, [B] // Exclusive store with release cbnz w1, 1b dmb ish // Full barrier I think the two-way memory barrier may not be ensured for this case. Felix From aph at redhat.com Tue Nov 12 16:04:57 2019 From: aph at redhat.com (Andrew Haley) Date: Tue, 12 Nov 2019 16:04:57 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> Message-ID: <58ba3a50-fe49-f231-85b2-37d8f8b136f0@redhat.com> On 11/12/19 12:02 PM, Yangfei (Felix) wrote: > I also suggest we implement these functions with inline assembly here. Please let's not. Long term it would be nice to migrate all of HotSpot from the current inline hackery to real C++ atomics. There has been a considerable effort to make C++ and Java memory models compatible, and we should utilize this. > For Atomic::PlatformXchg, we may issue two consecutive full memory > barriers with the current status. OK, but is this actually important? What uses it? -- Andrew Haley (he/him) Java Platform Lead Engineer Red Hat UK Ltd. 
https://keybase.io/andrewhaley EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From erik.osterlund at oracle.com Tue Nov 12 17:38:11 2019 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Tue, 12 Nov 2019 18:38:11 +0100 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> Message-ID: <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> Hi Felix, I was hoping to stay out of this conversation, but couldn't resist butting in unfortunately. I have to agree with you - you are absolutely right. We have a mix of the JMM, the C++ memory model and HotSpot's memory model, which predates that. JMM and C++ memory model are indeed quite similar now in terms of semantics (yet there exists choice in implementation of it), but the old memory model used in HotSpot is kind of not similar. Ideally we would have less memory models and just go with the one used by C++/JMM, and then we just have to convince ourselves that the choice of implementation of seq_cst by the compiler is compatible to the one we use to implement the JMM in our JIT-compiled code. But it seems to me that we are not there. Last time I discussed this with Andrew Haley, we disagreed and didn't really get anywhere. 
Andrew wanted to use the GCC intrinsics, and I was arguing that we should use inline assembly as a) the memory model we are supporting is not the same as what the intrinsic is providing, and b) we are relying on the implementation of the intrinsics to emit very specific instruction sequences to be compatible with the memory model, and it would be more clear if we could see in the inline assembly that we indeed used exactly those instructions that we expected and not something unexpected, which we would only randomly find out when disassembling the code (ahem). Now it looks like you have discovered that we sometimes have double trailing dmb ish, and sometimes lacking leading dmb ish if I am reading this right. That seems to make the case stronger, that by looking at the intrinsic calls, it's not obvious what instruction sequence will be emitted, and whether that is compatible with the memory model it is implementing or not, and you really have to disassemble it to find out what we actually got. And it looks like what we got is not at all what we wanted. My hope is that the AArch64 port should use inline assembly as you suggest, so we can see that the generated code is correct, as we wait for the glorious future where all HotSpot code has been rewritten to work with seq_cst (and we are *not* there now). Having said that, now I will try to go and hide in a corner again... 
Thanks, /Erik On 2019-11-12 13:14, Yangfei (Felix) wrote: >> -----Original Message----- >> From: Yangfei (Felix) >> Sent: Tuesday, November 12, 2019 8:03 PM >> To: 'Andrew Dinn' ; Andrew Haley ; >> aarch64-port-dev at openjdk.java.net >> Cc: hotspot-dev at openjdk.java.net >> Subject: RE: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic >> operations >> >>> -----Original Message----- >>> From: Andrew Dinn [mailto:adinn at redhat.com] >>> Sent: Tuesday, November 12, 2019 5:42 PM >>> To: Andrew Haley ; Yangfei (Felix) >>> ; aarch64-port-dev at openjdk.java.net >>> Cc: hotspot-dev at openjdk.java.net >>> Subject: Re: [aarch64-port-dev ] RFR: aarch64: minor improvements of >>> atomic operations >>> >>> On 12/11/2019 09:25, Andrew Haley wrote: >>>> On 11/12/19 8:37 AM, Yangfei (Felix) wrote: >>>>> This has been discussed somewhere before: >>>>> https://patchwork.kernel.org/patch/3575821/ >>>>> Let's keep the current status for safe. >>>> Yes. >>>> >>>> It's been interesting to see the progress of this patch. I don't >>>> think it's the first time that someone has been tempted to change >>>> this code to make it "more efficient". >>>> >>>> I wonder if we could perhaps add a comment to that code so that it >>>> doesn't happen again. I'm not sure exactly what the patch should say >>>> beyond "do not touch". Perhaps something along the lines of "Do not >>>> touch this code unless you have at least Black Belt, 4th Dan in >>>> memory ordering." :-) >>>> >>>> More seriously, maybe simply "Note that memory_order_conservative >>>> requires a full barrier after atomic stores. See >>>> https://patchwork.kernel.org/patch/3575821/" >>> Yes, that would be a help. It's particularly easy to get confused here >>> because we happily omit the ordering of an stlr store wrt subsequent >>> stores when the strl is implementing a Java volatile write or a Java cmpxchg. 
>>> >>> So, it might be worth adding a rider that implementing the full >>> memory_order_conservative semantics is necessary because VM code >>> relies on the strong ordering wrt writes that the cmpxchg is required to >> provide. >> I also suggest we implement these functions with inline assembly here. >> For Atomic::PlatformXchg, we may issue two consecutive full memory barriers >> with the current status. >> I used GCC 7.3.0 to compile the following function: >> >> $ cat test.c >> long foo(long add_value, long volatile* dest, long exchange_value) { >> long val = __sync_lock_test_and_set(dest, exchange_value); >> >> __sync_synchronize(); >> >> return val; >> } >> >> $ cat test.s >> .arch armv8-a >> .file "test.c" >> .text >> .align 2 >> .p2align 3,,7 >> .global foo >> .type foo, %function >> foo: >> .L2: >> ldxr x0, [x1] >> stxr w3, x2, [x1] >> cbnz w3, .L2 >> dmb ish < ======== >> dmb ish < ======== >> ret >> .size foo, .-foo >> .ident "GCC: (GNU) 7.3.0" >> .section .note.GNU-stack,"", at progbits > Also this is different from the following sequence (stxr instead of stlxr). > > > > // atomic_op (B) > 1: ldxr x0, [B] // Exclusive load > > stlxr w1, x0, [B] // Exclusive store with release > cbnz w1, 1b > dmb ish // Full barrier > > > > I think the two-way memory barrier may not be ensured for this case. 
> > Felix From aph at redhat.com Tue Nov 12 19:00:20 2019 From: aph at redhat.com (Andrew Haley) Date: Tue, 12 Nov 2019 19:00:20 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> Message-ID: <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> On 11/12/19 5:38 PM, Erik ?sterlund wrote: > My hope is that the AArch64 port should use inline assembly as you suggest, so we can see that the generated code is correct, as we wait for the glorious future where all HotSpot code has been rewritten to work with seq_cst (and we are *not* there now). I don't doubt it. :-) But my arguments about the C++ intrinsics being well-enough defined, at least on AArch64 Linux, have not changed, and I'm not going to argue all that again. I'll grant you that there may well be issues on various x86 compilers, but that isn't relevant here. > Now it looks like you have discovered that we sometimes have double trailing dmb ish, and sometimes lacking leading dmb ish if I am reading this right. That seems to make the case stronger, Sure, we can use inline asm if there's no other way to do it, but I don't think that's necessary. All we need is to use T res; __atomic_exchange(dest, &exchange_value, &res, __ATOMIC_RELEASE); FULL_MEM_BARRIER; -- Andrew Haley (he/him) Java Platform Lead Engineer Red Hat UK Ltd. 
https://keybase.io/andrewhaley EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From ecki at zusammenkunft.net Tue Nov 12 21:13:32 2019 From: ecki at zusammenkunft.net (Bernd Eckenfels) Date: Tue, 12 Nov 2019 21:13:32 +0000 Subject: Itlb_multihit Intel mitigation and JVM Message-ID: Hello, With the latest Linux kernel updates a mitigation for the Intel MCE Problem with multiple iTLB page sizes hit the kernel. KVM Hypervisor will make large pages non executable and split them down to 4K pages if they are fetched for execution. https://www.phoronix.com/scan.php?page=news_item&px=iITLB-Multihit-TAA-Kernel-Code The mitigation on the host is visible here /sys/devices/system/cpu/vulnerabilities/itlb_multihit Can be controlled with bootflag kvm.nx_huge_pages=off I researched a bit, it should show up as a split counter nx_largepages_splitted in kvm stats of debugfs on the Hypervisor. I wonder if anybody did already tests with the JVM under which conditions a JVM running in a KVM Hypervisor will trigger those page splits and suffer from it. As I understand this would require .text or codecache segments to be large pages. Is this triggered by the JVM, does it for example use transparent HP with madvice (only) on? Does anybody have studied the impact on KVM Hypervisor and how are the other virtualization solutions protecting against this and holding up (for a mostly JVM based workload). Gruss Bernd -- http://bernd.eckenfels.net From ecki at zusammenkunft.net Tue Nov 12 21:20:45 2019 From: ecki at zusammenkunft.net (Bernd Eckenfels) Date: Tue, 12 Nov 2019 21:20:45 +0000 Subject: Intel MCU with JCC erratum Message-ID: Intel ships microcode updates for many of their processors fixing a instability in jump predictions, called the JCC erratum. https://www.phoronix.com/scan.php?page=article&item=intel-jcc-microcode&num=1 It seems to affect jump-happy Usermode applications, and I wonder what performance impact those updates will have on JVM workloads. 
And as a extension of this, the toolchain improvements Intel is working on to avoid the problematic jumps, will this also find its way in the hotspot compilers (and the OpenJDK built toolchain) Gruss Bernd -- http://bernd.eckenfels.net From coleen.phillimore at oracle.com Wed Nov 13 00:18:35 2019 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Tue, 12 Nov 2019 19:18:35 -0500 Subject: RFR (M) 8233913: Remove implicit conversion from Method* to methodHandle In-Reply-To: References: Message-ID: <051f0934-8f05-3b44-d125-4045a3ff06d1@oracle.com> We meant to send this to the mailing list. Coleen On 11/12/19 2:44 PM, coleen.phillimore at oracle.com wrote: > > > On 11/12/19 1:28 PM, Ioi Lam wrote: >> Hi Coleen, >> >> I see some replacement of methodHandle to Method*. >> >> Is there a general rule about when we can use a Method*, and when we >> must use a methodHandle? I can only find this in the comments >> >> // Metadata Handles.? Unlike oop Handles these are needed to prevent >> metadata >> // from being reclaimed by RedefineClasses. >> // Metadata Handles should be passed around as const references to >> avoid copy construction >> // and destruction for parameters. >> >> but it's not clear when RedefineClasses can happen. > > RedefineClasses can only happen at a safepoint. This is never going to > change to handshakes even though some people want everything to be > handshakes :)?? Also, while classfile parsing and loading, we're not > going to be able to redefine the class, so the Method*/ConstantPool* > are safe there, but I didn't want the code to have the distinction, > because that's hard, so there are some > methodHandles/constantPoolHandles there too.?? Most are left over from > Permgen days because they could move. > > At one point, I was going to write a CheckUnhandledMetadata code but > it's not as easy as the oop code because Method* isn't a typedef that > I can conveniently make into a class, like we did with oop. 
> > One thing we did with these handles long ago, was to change all the > parameters to take Handles rather than the oops so that we could > Handleize something as high up the call stack as possible, and then > not have to rehandle them.? So that's what I did here. > > Some places I went the other way like the print functions because the > callers didn't have a Handle already (they got the Method* out of some > data structure). > > And I changed the return types to have Method* because these were also > places that came out of some data structure (eg. > InstanceKlass::find_method()) so we'd have to call Thread::current to > turn it into a methodHandle then pass it back up. > > So I guess the general rule is to Handleize things at the top of the > call stack and pass them down, but not at the bottom of the call stack > and pass them up.?? Where your stacks grow down. > > I don't think it's 100% consistent yet, but I'm trying to get closer. > > Coleen >> >> Thanks >> - Ioi >> >> On 11/11/19 11:59 AM, coleen.phillimore at oracle.com wrote: >>> Summary: Fix call sites to use existing THREAD local or pass down >>> THREAD local for shallower callsites. Make linkResolver methods >>> return Method* for caller to handleize if needed. >>> >>> There are a small number of changes to several files, mostly >>> obvious.? Some comments on a few of the specific changes: >>> >>> http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev/src/hotspot/share/aot/aotCompiledMethod.cpp.udiff.html >>> >>> http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev/src/hotspot/share/code/nmethod.cpp.udiff.html >>> >>> The comment and the methodHandle don't make sense since there'a s >>> NSV there.? Method* will not be reclaimed ever, and it doesn't >>> move.? There might have been a safepoint here once. 
>>> >>> http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev/src/hotspot/share/ci/ciMethod.cpp.udiff.html >>> >>> If you have a methodHandle, you don't need to do >>> mh()->max_stack().?? The -> operator will expose the underlying >>> Method*. >>> >>> http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev/src/hotspot/share/classfile/javaClasses.cpp.udiff.html >>> >>> The Method* in the vframeStream is safe here because the stack frame >>> are followed in case of safepoint, and will mark the Method* as >>> live, so this was unnecessary. >>> >>> http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev/src/hotspot/share/compiler/tieredThresholdPolicy.hpp.udiff.html >>> >>> I changed several of the TieredThresholdPolicy functions to take >>> const methodHandle as a parameter to avoid unhandleizing and >>> rehandleizing, and avoid Thread::current() calls. >>> >>> http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev/src/hotspot/share/interpreter/linkResolver.hpp.udiff.html >>> >>> I changed LinkResolver methods to return Method* to avoid >>> unnecessary handlizing.? The handle copy is elided by most compilers >>> but it was still not needed by many callers. >>> >>> Tested with tier1 on all Oracle platforms, and tier 2-3 on linux-x64. >>> >>> I also performance tested this with slight avg 0.5% improvement, and >>> fewer instructions: >>> >>> eg: PerfStartup-Noop instructions on linux-x64 (before/after) >>> >>> 0.49% >>> 149943356.15 >>> ? 262156.46 >>> ????149213135.00 >>> ? 
281141.80 >>> p = 0.000 >>> >>> >>> >>> open webrev at >>> http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev >>> bug link https://bugs.openjdk.java.net/browse/JDK-8233913 >>> >>> thanks, >>> Coleen >>> >>> >>> >>> >>> >>> >>> >> > From felix.yang at huawei.com Wed Nov 13 02:35:39 2019 From: felix.yang at huawei.com (Yangfei (Felix)) Date: Wed, 13 Nov 2019 02:35:39 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> Message-ID: > -----Original Message----- > From: Andrew Haley [mailto:aph at redhat.com] > Sent: Wednesday, November 13, 2019 3:00 AM > To: Erik ?sterlund ; Yangfei (Felix) > ; Andrew Dinn ; > aarch64-port-dev at openjdk.java.net > Cc: hotspot-dev at openjdk.java.net > Subject: Re: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic > operations > > On 11/12/19 5:38 PM, Erik ?sterlund wrote: > > My hope is that the AArch64 port should use inline assembly as you suggest, > so we can see that the generated code is correct, as we wait for the glorious > future where all HotSpot code has been rewritten to work with seq_cst (and we > are *not* there now). > > I don't doubt it. :-) > > But my arguments about the C++ intrinsics being well-enough defined, at least > on AArch64 Linux, have not changed, and I'm not going to argue all that again. > I'll grant you that there may well be issues on various x86 compilers, but that > isn't relevant here. 
Looks like I reignited an old discussion :- ) > > Now it looks like you have discovered that we sometimes have double > > trailing dmb ish, and sometimes lacking leading dmb ish if I am > > reading this right. That seems to make the case stronger, > > Sure, we can use inline asm if there's no other way to do it, but I don't think > that's necessary. All we need is to use > > T res; > __atomic_exchange(dest, &exchange_value, &res, __ATOMIC_RELEASE); > FULL_MEM_BARRIER; > When we go the C++ intrinsics way, we should also handle Atomic::PlatformCmpxchg. When I compile the following function with GCC 4.9.3: long foo(long exchange_value, long volatile* dest, long compare_value) { long val = __sync_val_compare_and_swap(dest, compare_value, exchange_value); return val; } I got: .L2: ldaxr x0, [x1] cmp x0, x2 bne .L3 stlxr w4, x3, [x1] cbnz w4, .L2 .L3: Proposed patch: diff -r 846fee5ea75e src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp --- a/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp Wed Nov 13 10:27:06 2019 +0900 +++ b/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp Wed Nov 13 10:14:58 2019 +0800 @@ -52,7 +52,7 @@ T volatile* dest, atomic_memory_order order) const { STATIC_ASSERT(byte_size == sizeof(T)); - T res = __sync_lock_test_and_set(dest, exchange_value); + T res = __atomic_exchange_n(dest, exchange_value, __ATOMIC_RELEASE); FULL_MEM_BARRIER; return res; } @@ -70,7 +70,11 @@ __ATOMIC_RELAXED, __ATOMIC_RELAXED); return value; } else { - return __sync_val_compare_and_swap(dest, compare_value, exchange_value); + T value = compare_value; + __atomic_compare_exchange(dest, &value, &exchange_value, /*weak*/false, + __ATOMIC_RELEASE, __ATOMIC_RELAXED); + FULL_MEM_BARRIER; + return value; } } From kim.barrett at oracle.com Wed Nov 13 06:11:59 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 13 Nov 2019 01:11:59 -0500 Subject: RFR (S): 8233702: Introduce helper function to clamp value to range In-Reply-To: 
<5e24eab5-7a81-3d14-31e4-7d81a10debff@oracle.com>
References: <5e24eab5-7a81-3d14-31e4-7d81a10debff@oracle.com>
Message-ID: <4F8382A7-2BAE-42D0-9F01-7E57CA588FF2@oracle.com>

> On Nov 12, 2019, at 4:17 AM, Thomas Schatzl wrote:
> 
> Hi all,
> 
> I would like to introduce a small helper function to clamp a given value between a min/max value.
> This would unclutter a few MIN(MAX(value, ), ) statements for imho better readability.
> 
> There are two places in (non-CMS) code remaining with the above statement, because in these cases it happens that a value min > max is passed, i.e. you potentially (already) get returned unexpected values.
> (I did remove that assert in this webrev)

I think the clamp function should be asserting min <= max.

I haven't reviewed all of the changed uses yet, so not yet a review.

> 
> These are in
> 
> methodData.cpp:
> 
> int MethodData::compute_extra_data_count(int data_size, int empty_bc_count, bool needs_speculative_traps) {
> 
> 933: int extra_data_count = MIN2(empty_bc_count, MAX2(4, (empty_bc_count * 30) / 100));
> 
> 
> hashtable.cpp:
> 
> template BasicHashtableEntry* BasicHashtable::new_entry(unsigned int hashValue) {
> 
> 64: int block_size = MIN2(512, MAX2((int)_table_size / 2, (int)_number_of_entries));
> 
> I would like to ask the responsible teams (compiler and runtime) to give an opinion on these cases, i.e. if these should be converted (these are intentional) or I should file an RFE to investigate further.
> 
> CR:
> https://bugs.openjdk.java.net/browse/JDK-8233702
> Webrev:
> http://cr.openjdk.java.net/~tschatzl/8233702/webrev/
> Testing:
> hs-tier1-5
> 
> Thanks,
> Thomas

From john.r.rose at oracle.com  Wed Nov 13 06:23:09 2019
From: john.r.rose at oracle.com (John Rose)
Date: Tue, 12 Nov 2019 22:23:09 -0800
Subject: RFR (S): 8233702: Introduce helper function to clamp value to range
In-Reply-To: <4F8382A7-2BAE-42D0-9F01-7E57CA588FF2@oracle.com>
References: <5e24eab5-7a81-3d14-31e4-7d81a10debff@oracle.com> <4F8382A7-2BAE-42D0-9F01-7E57CA588FF2@oracle.com>
Message-ID: 

On Nov 12, 2019, at 10:11 PM, Kim Barrett wrote:
> 
>> On Nov 12, 2019, at 4:17 AM, Thomas Schatzl wrote:
>> 
>> Hi all,
>> 
>> I would like to introduce a small helper function to clamp a given value between a min/max value.
>> This would unclutter a few MIN(MAX(value, ), ) statements for imho better readability.
>> 
>> There are two places in (non-CMS) code remaining with the above statement, because in these cases it happens that a value min > max is passed, i.e. you potentially (already) get returned unexpected values.
>> (I did remove that assert in this webrev)
> 
> I think the clamp function should be asserting min <= max.

I agree.

Overall I like this change.  I hope the remaining odd cases can be recoded
as clamps, because it's helpful when reasoning about the code to see which
values are limits and which is the variable to be kept within the limits.
The min/max idioms obscure that distinction.
From tobias.hartmann at oracle.com Wed Nov 13 07:59:53 2019 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 13 Nov 2019 08:59:53 +0100 Subject: RFR (M) 8233913: Remove implicit conversion from Method* to methodHandle In-Reply-To: References: Message-ID: <4804796f-65d7-f1c0-2bd2-7975dd5979a2@oracle.com> Hi Coleen, On 11.11.19 20:59, coleen.phillimore at oracle.com wrote: > open webrev at http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev > bug link https://bugs.openjdk.java.net/browse/JDK-8233913 Changes to compiler specific files look good to me. Best regards, Tobias From erik.osterlund at oracle.com Wed Nov 13 08:38:25 2019 From: erik.osterlund at oracle.com (=?UTF-8?Q?Erik_=c3=96sterlund?=) Date: Wed, 13 Nov 2019 09:38:25 +0100 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> Message-ID: <722749eb-12f5-16d7-f498-4147a2d32cd9@oracle.com> Hi Andrew, On 2019-11-12 20:00, Andrew Haley wrote: > But my arguments about the C++ intrinsics being well-enough defined, > at least on AArch64 Linux, have not changed, and I'm not going to > argue all that again. I'll grant you that there may well be issues on > various x86 compilers, but that isn't relevant here. I also do not want to revive that discussion at this time. So I'm just going to note the way we think about this is... intrinsically different. With that said, I believe my work here is done. Intrinsic puzzle away. 
;-) /Erik From felix.yang at huawei.com Wed Nov 13 08:36:41 2019 From: felix.yang at huawei.com (Yangfei (Felix)) Date: Wed, 13 Nov 2019 08:36:41 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> Message-ID: > -----Original Message----- > From: Yangfei (Felix) > Sent: Wednesday, November 13, 2019 10:36 AM > To: 'Andrew Haley' ; Erik ?sterlund > ; Andrew Dinn ; > aarch64-port-dev at openjdk.java.net > Cc: hotspot-dev at openjdk.java.net > Subject: RE: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic > operations > > > -----Original Message----- > > From: Andrew Haley [mailto:aph at redhat.com] > > Sent: Wednesday, November 13, 2019 3:00 AM > > To: Erik ?sterlund ; Yangfei (Felix) > > ; Andrew Dinn ; > > aarch64-port-dev at openjdk.java.net > > Cc: hotspot-dev at openjdk.java.net > > Subject: Re: [aarch64-port-dev ] RFR: aarch64: minor improvements of > > atomic operations > > > > On 11/12/19 5:38 PM, Erik ?sterlund wrote: > > > My hope is that the AArch64 port should use inline assembly as you > > > suggest, > > so we can see that the generated code is correct, as we wait for the > > glorious future where all HotSpot code has been rewritten to work with > > seq_cst (and we are *not* there now). > > > > I don't doubt it. :-) > > > > But my arguments about the C++ intrinsics being well-enough defined, > > at least on AArch64 Linux, have not changed, and I'm not going to argue all > that again. > > I'll grant you that there may well be issues on various x86 compilers, > > but that isn't relevant here. 
> > Looks like I reignited an old discussion :- ) > > > > Now it looks like you have discovered that we sometimes have double > > > trailing dmb ish, and sometimes lacking leading dmb ish if I am > > > reading this right. That seems to make the case stronger, > > > > Sure, we can use inline asm if there's no other way to do it, but I > > don't think that's necessary. All we need is to use > > > > T res; > > __atomic_exchange(dest, &exchange_value, &res, __ATOMIC_RELEASE); > > FULL_MEM_BARRIER; > > > > When we go the C++ intrinsics way, we should also handle > Atomic::PlatformCmpxchg. > When I compile the following function with GCC 4.9.3: > > long foo(long exchange_value, long volatile* dest, long compare_value) { > long val = __sync_val_compare_and_swap(dest, compare_value, > exchange_value); > return val; > } > > I got: > > .L2: > ldaxr x0, [x1] > cmp x0, x2 > bne .L3 > stlxr w4, x3, [x1] > cbnz w4, .L2 > .L3: > > > Proposed patch: > diff -r 846fee5ea75e > src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp > --- a/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp Wed Nov 13 > 10:27:06 2019 +0900 > +++ b/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp Wed Nov > +++ 13 10:14:58 2019 +0800 > @@ -52,7 +52,7 @@ > T volatile* > dest, > > atomic_memory_order order) const { > STATIC_ASSERT(byte_size == sizeof(T)); > - T res = __sync_lock_test_and_set(dest, exchange_value); > + T res = __atomic_exchange_n(dest, exchange_value, __ATOMIC_RELEASE); > FULL_MEM_BARRIER; > return res; > } > @@ -70,7 +70,11 @@ > __ATOMIC_RELAXED, > __ATOMIC_RELAXED); > return value; > } else { > - return __sync_val_compare_and_swap(dest, compare_value, > exchange_value); > + T value = compare_value; > + __atomic_compare_exchange(dest, &value, &exchange_value, > /*weak*/false, > + __ATOMIC_RELEASE, > __ATOMIC_RELAXED); > + FULL_MEM_BARRIER; > + return value; > } > } Still not strong enough? considering the first of ldxr of the loop may be speculated. 
v2 patch: diff -r 846fee5ea75e src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp --- a/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp Wed Nov 13 10:27:06 2019 +0900 +++ b/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp Wed Nov 13 16:33:16 2019 +0800 @@ -52,7 +52,7 @@ T volatile* dest, atomic_memory_order order) const { STATIC_ASSERT(byte_size == sizeof(T)); - T res = __sync_lock_test_and_set(dest, exchange_value); + T res = __atomic_exchange_n(dest, exchange_value, __ATOMIC_RELEASE); FULL_MEM_BARRIER; return res; } @@ -70,7 +70,12 @@ __ATOMIC_RELAXED, __ATOMIC_RELAXED); return value; } else { - return __sync_val_compare_and_swap(dest, compare_value, exchange_value); + T value = compare_value; + FULL_MEM_BARRIER; + __atomic_compare_exchange(dest, &value, &exchange_value, /*weak*/false, + __ATOMIC_RELAXED, __ATOMIC_RELAXED); + FULL_MEM_BARRIER; + return value; } } From aph at redhat.com Wed Nov 13 09:00:21 2019 From: aph at redhat.com (Andrew Haley) Date: Wed, 13 Nov 2019 09:00:21 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> Message-ID: <1325b063-cc1b-74fe-3b78-f4eb4518d116@redhat.com> On 11/13/19 8:36 AM, Yangfei (Felix) wrote: > Still not strong enough? considering the first of ldxr of the loop may be speculated. Come on now, you must have read the thread on kernel-dev you pointed me to. -- Andrew Haley (he/him) Java Platform Lead Engineer Red Hat UK Ltd. 
https://keybase.io/andrewhaley
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671

From felix.yang at huawei.com  Wed Nov 13 09:26:37 2019
From: felix.yang at huawei.com (Yangfei (Felix))
Date: Wed, 13 Nov 2019 09:26:37 +0000
Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations
In-Reply-To: <1325b063-cc1b-74fe-3b78-f4eb4518d116@redhat.com>
References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> <1325b063-cc1b-74fe-3b78-f4eb4518d116@redhat.com>
Message-ID: 

> -----Original Message-----
> From: Andrew Haley [mailto:aph at redhat.com]
> Sent: Wednesday, November 13, 2019 5:00 PM
> To: Yangfei (Felix) ; Erik Österlund
> ; Andrew Dinn ;
> aarch64-port-dev at openjdk.java.net
> Cc: hotspot-dev at openjdk.java.net
> Subject: Re: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic
> operations
>
> On 11/13/19 8:36 AM, Yangfei (Felix) wrote:
> > Still not strong enough? considering the first of ldxr of the loop may be
> speculated.
>
> Come on now, you must have read the thread on kernel-dev you pointed me to.
>

Yes, the cmpxchg case is different here.
So is the v2 patch in my previous mail approved?
I will create a bug and do the necessary testing.
Thanks, Felix From aph at redhat.com Wed Nov 13 09:39:05 2019 From: aph at redhat.com (Andrew Haley) Date: Wed, 13 Nov 2019 09:39:05 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> <1325b063-cc1b-74fe-3b78-f4eb4518d116@redhat.com> Message-ID: <9aa92f57-ea8a-7e0a-2218-bf360a13c46e@redhat.com> On 11/13/19 9:26 AM, Yangfei (Felix) wrote: > Yes, the cmpxchg case is different here. > So the v2 patch in my previous mail approved? > Will create a bug and do necessary testing. I don't know which patch is v2, but for the reasons carefully laid out in the kernel-dev thread we don't need two full barriers. The first version of Atomic::PlatformCmpxchg you posted is OK. -- Andrew Haley (he/him) Java Platform Lead Engineer Red Hat UK Ltd. 
https://keybase.io/andrewhaley EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From felix.yang at huawei.com Wed Nov 13 09:46:55 2019 From: felix.yang at huawei.com (Yangfei (Felix)) Date: Wed, 13 Nov 2019 09:46:55 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: <9aa92f57-ea8a-7e0a-2218-bf360a13c46e@redhat.com> References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> <1325b063-cc1b-74fe-3b78-f4eb4518d116@redhat.com> <9aa92f57-ea8a-7e0a-2218-bf360a13c46e@redhat.com> Message-ID: > -----Original Message----- > From: Andrew Haley [mailto:aph at redhat.com] > Sent: Wednesday, November 13, 2019 5:39 PM > To: Yangfei (Felix) ; Erik Österlund > ; Andrew Dinn ; > aarch64-port-dev at openjdk.java.net > Cc: hotspot-dev at openjdk.java.net > Subject: Re: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic > operations > > On 11/13/19 9:26 AM, Yangfei (Felix) wrote: > > Yes, the cmpxchg case is different here. > > So the v2 patch in my previous mail approved? > > Will create a bug and do necessary testing. > > I don't know which patch is v2, but for the reasons carefully laid out in the > kernel-dev thread we don't need two full barriers. The first version of > Atomic::PlatformCmpxchg you posted is OK. > Well, I think the cmpxchg case is different: the compare in the loop may fail, and then we don't get a chance to execute the stlxr instruction. 
This is explicitly discussed in that thread: https://patchwork.kernel.org/patch/3575821/ As a result, aarch64 Linux plants two barriers in that patch: @@ -112,17 +114,20 @@ static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new) unsigned long tmp; int oldval; + smp_mb(); < ======== + asm volatile("// atomic_cmpxchg\n" -"1: ldaxr %w1, %2\n" +"1: ldxr %w1, %2\n" " cmp %w1, %w3\n" " b.ne 2f\n" -" stlxr %w0, %w4, %2\n" +" stxr %w0, %w4, %2\n" " cbnz %w0, 1b\n" "2:" : "=&r" (tmp), "=&r" (oldval), "+Q" (ptr->counter) : "Ir" (old), "r" (new) : "cc", "memory"); + smp_mb(); < ======== return oldval; } That's why I switched to the V2 patch: diff -r 846fee5ea75e src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp --- a/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp Wed Nov 13 10:27:06 2019 +0900 +++ b/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp Wed Nov 13 16:33:16 2019 +0800 @@ -52,7 +52,7 @@ T volatile* dest, atomic_memory_order order) const { STATIC_ASSERT(byte_size == sizeof(T)); - T res = __sync_lock_test_and_set(dest, exchange_value); + T res = __atomic_exchange_n(dest, exchange_value, __ATOMIC_RELEASE); FULL_MEM_BARRIER; return res; } @@ -70,7 +70,12 @@ __ATOMIC_RELAXED, __ATOMIC_RELAXED); return value; } else { - return __sync_val_compare_and_swap(dest, compare_value, exchange_value); + T value = compare_value; + FULL_MEM_BARRIER; + __atomic_compare_exchange(dest, &value, &exchange_value, /*weak*/false, + __ATOMIC_RELAXED, __ATOMIC_RELAXED); + FULL_MEM_BARRIER; + return value; } } From matthias.baesken at sap.com Wed Nov 13 10:02:42 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Wed, 13 Nov 2019 10:02:42 +0000 Subject: RFR [XS]: 8234070: solaris sparc build fails in templateInterpreterGenerator.cpp - was RE: fastdebug Solaris build error in jdk/jdk Message-ID: Hello, please review the following change . It fixes the fastdebug Solaris Sparc build after recent changes 8233918 / 8233498 . 
Erik, may I add you as a reviewer ? Bug/webrev: https://bugs.openjdk.java.net/browse/JDK-8234070 http://cr.openjdk.java.net/~mbaesken/webrevs/8234070.0/ Thanks, Matthias From: Erik ?sterlund Sent: Mittwoch, 13. November 2019 09:41 To: Baesken, Matthias ; patric.hedlin at oracle.com Cc: Zeller, Arno ; Langer, Christoph Subject: Re: fastdebug Solaris build error in jdk/jdk Hi Matthias, Your suggestion sounds reasonable. Thanks, /Erik On 2019-11-13 09:29, Baesken, Matthias wrote: Hello, I just opened https://bugs.openjdk.java.net/browse/JDK-8234070 8234070: solaris sparc build fails in templateInterpreterGenerator.cpp After 8233498: Remove dead code. , and 8233918: 8233498 broke build on SPARC the (fast)debug Solaris build is still broken , error is /nightly/jdk/src/hotspot/share/interpreter/templateInterpreterGenerator.cpp", line 376: Error: verify_FPU is not a member of InterpreterMacroAssembler. 1 Error(s) detected. My suggestion would be to remove the verify_FPU call from templateInterpreterGenerator.cpp because it is a no-op anyway these days on almost all platforms and of very limited use . Best regards, Matthias From simonisv at amazon.de Wed Nov 13 10:36:05 2019 From: simonisv at amazon.de (Simonis, Volker) Date: Wed, 13 Nov 2019 10:36:05 +0000 Subject: Itlb_multihit Intel mitigation and JVM In-Reply-To: References: Message-ID: Am 12.11.2019 22:14 schrieb Bernd Eckenfels : > > Hello, > > With the latest Linux kernel updates a mitigation for the Intel MCE Problem with multiple iTLB page sizes hit the kernel. KVM Hypervisor will make large pages non executable and split them down to 4K pages if they are fetched for execution. 
> > https://www.phoronix.com/scan.php?page=news_item&px=iITLB-Multihit-TAA-Kernel-Code > > The mitigation on the host is visible here > /sys/devices/system/cpu/vulnerabilities/itlb_multihit > Can be controlled with bootflag kvm.nx_huge_pages=off > I researched a bit, it should show up as a split counter nx_largepages_splitted in kvm stats of debugfs on the Hypervisor. > > I wonder if anybody has already run tests with the JVM to see under which conditions a JVM running in a KVM Hypervisor will trigger those page splits and suffer from it. > > As I understand it, this would require .text or codecache segments to be large pages. Is this triggered by the JVM, does it for example use transparent HP with madvise (only) on? > It is not the default, but it can be enabled with -XX:+UseTransparentHugePages. This will allocate both the Java Heap and the CodeCache with madvise(..., MADV_HUGEPAGE). > Has anybody studied the impact on the KVM Hypervisor, and how are the other virtualization solutions protecting against this and holding up (for a mostly JVM based workload)? > > Gruss > Bernd > -- > http://bernd.eckenfels.net Amazon Development Center Germany GmbH Krausenstr. 
38 10117 Berlin Geschaeftsfuehrung: Christian Schlaeger, Ralf Herbrich Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B Sitz: Berlin Ust-ID: DE 289 237 879 From aph at redhat.com Wed Nov 13 10:38:25 2019 From: aph at redhat.com (Andrew Haley) Date: Wed, 13 Nov 2019 10:38:25 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> <1325b063-cc1b-74fe-3b78-f4eb4518d116@redhat.com> <9aa92f57-ea8a-7e0a-2218-bf360a13c46e@redhat.com> Message-ID: <1345fade-e1b4-42f0-c86f-9fd518431fcf@redhat.com> On 11/13/19 9:46 AM, Yangfei (Felix) wrote: > That's why I switched to the V2 patch: I see. This seems excessive. I doubt that there is any code in HotSpot that relies on such things, especially given that we've manage with mere sequential consistency for CMPXCHG for so long, but if you want to go for the full Howitzer I won't try to stop you. -- Andrew Haley (he/him) Java Platform Lead Engineer Red Hat UK Ltd. https://keybase.io/andrewhaley EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671 From coleen.phillimore at oracle.com Wed Nov 13 12:18:18 2019 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 13 Nov 2019 07:18:18 -0500 Subject: RFR (M) 8233913: Remove implicit conversion from Method* to methodHandle In-Reply-To: <4804796f-65d7-f1c0-2bd2-7975dd5979a2@oracle.com> References: <4804796f-65d7-f1c0-2bd2-7975dd5979a2@oracle.com> Message-ID: <35beeda1-6802-37f6-aa2b-dddcefc81ffa@oracle.com> Thanks Tobias! 
Coleen On 11/13/19 2:59 AM, Tobias Hartmann wrote: > Hi Coleen, > > On 11.11.19 20:59, coleen.phillimore at oracle.com wrote: >> open webrev at http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev >> bug link https://bugs.openjdk.java.net/browse/JDK-8233913 > Changes to compiler specific files look good to me. > > Best regards, > Tobias From claes.redestad at oracle.com Wed Nov 13 13:27:52 2019 From: claes.redestad at oracle.com (Claes Redestad) Date: Wed, 13 Nov 2019 14:27:52 +0100 Subject: RFR (M) 8233913: Remove implicit conversion from Method* to methodHandle In-Reply-To: References: Message-ID: <1d5dfee4-709f-7625-1998-cbd26609c372@oracle.com> Hi Coleen, On 2019-11-11 20:59, coleen.phillimore at oracle.com wrote: > > 149943356.15 > ? 262156.46 > 149213135.00 > ? 281141.80 > p = 0.000 I did some local testing and I can verify a similar improvement of around 400-500k fewer instructions on a Hello World. Great! > open webrev at http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev Looks good to me! /Claes From coleen.phillimore at oracle.com Wed Nov 13 13:41:35 2019 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Wed, 13 Nov 2019 08:41:35 -0500 Subject: RFR (M) 8233913: Remove implicit conversion from Method* to methodHandle In-Reply-To: <1d5dfee4-709f-7625-1998-cbd26609c372@oracle.com> References: <1d5dfee4-709f-7625-1998-cbd26609c372@oracle.com> Message-ID: <93b9923c-6921-02c7-d4e9-4ebf9a461c7c@oracle.com> On 11/13/19 8:27 AM, Claes Redestad wrote: > Hi Coleen, > > On 2019-11-11 20:59, coleen.phillimore at oracle.com wrote: >> >> 149943356.15 >> ? 262156.46 >> 149213135.00 >> ? 281141.80 >> p = 0.000 > > I did some local testing and I can verify a similar improvement of > around 400-500k fewer instructions on a Hello World. Great! Awesome!? Happy to help with performance. > >> open webrev at >> http://cr.openjdk.java.net/~coleenp/2019/8233913.01/webrev > > Looks good to me! Thanks! 
Coleen > > /Claes From harold.seigel at oracle.com Wed Nov 13 14:08:29 2019 From: harold.seigel at oracle.com (Harold Seigel) Date: Wed, 13 Nov 2019 09:08:29 -0500 Subject: RFR [XS]: 8234070: solaris sparc build fails in templateInterpreterGenerator.cpp - was RE: fastdebug Solaris build error in jdk/jdk In-Reply-To: References: Message-ID: <4b6bbdff-808b-2c36-fafb-ab803890e38a@oracle.com> Hi Matthias, I was able to successfully build Solaris Sparc with the latest sources.? Can you see if you still have this problem? It looks like Solaris Sparc could find the definition of verify_FPU() in hotspot/src/cpu/sparc/interp_masm_sparc.hpp:? void verify_FPU(int stack_depth, TosState state = ftos) {}????? // No-op. Thanks, Harold On 11/13/2019 5:02 AM, Baesken, Matthias wrote: > Hello, please review the following change . > > It fixes the fastdebug Solaris Sparc build after recent changes 8233918 / 8233498 . > > Erik, may I add you as a reviewer ? > > > > Bug/webrev: > https://bugs.openjdk.java.net/browse/JDK-8234070 > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234070.0/ > > Thanks, Matthias > > > From: Erik ?sterlund > Sent: Mittwoch, 13. November 2019 09:41 > To: Baesken, Matthias ; patric.hedlin at oracle.com > Cc: Zeller, Arno ; Langer, Christoph > Subject: Re: fastdebug Solaris build error in jdk/jdk > > Hi Matthias, > > Your suggestion sounds reasonable. > > Thanks, > /Erik > On 2019-11-13 09:29, Baesken, Matthias wrote: > Hello, I just opened > > https://bugs.openjdk.java.net/browse/JDK-8234070 > > 8234070: solaris sparc build fails in templateInterpreterGenerator.cpp > > After 8233498: Remove dead code. , and 8233918: 8233498 broke build on SPARC > the (fast)debug Solaris build is still broken , error is > > /nightly/jdk/src/hotspot/share/interpreter/templateInterpreterGenerator.cpp", line 376: Error: verify_FPU is not a member of InterpreterMacroAssembler. > 1 Error(s) detected. 
> > > My suggestion would be to remove the verify_FPU call from templateInterpreterGenerator.cpp because it is a no-op anyway these days on almost all platforms and of very limited use . > > > > Best regards, Matthias > > > > From david.holmes at oracle.com Wed Nov 13 14:13:03 2019 From: david.holmes at oracle.com (David Holmes) Date: Thu, 14 Nov 2019 00:13:03 +1000 Subject: RFR [XS]: 8234070: solaris sparc build fails in templateInterpreterGenerator.cpp - was RE: fastdebug Solaris build error in jdk/jdk In-Reply-To: References: Message-ID: <2c2e4f4f-c3cf-ea3a-e113-524c7645c36a@oracle.com> On 13/11/2019 8:02 pm, Baesken, Matthias wrote: > Hello, please review the following change . > > It fixes the fastdebug Solaris Sparc build after recent changes 8233918 / 8233498 . Isn't this the problem that was fixed by https://bugs.openjdk.java.net/browse/JDK-8233918 ??? David ----- > Erik, may I add you as a reviewer ? > > > > Bug/webrev: > https://bugs.openjdk.java.net/browse/JDK-8234070 > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234070.0/ > > Thanks, Matthias > > > From: Erik ?sterlund > Sent: Mittwoch, 13. November 2019 09:41 > To: Baesken, Matthias ; patric.hedlin at oracle.com > Cc: Zeller, Arno ; Langer, Christoph > Subject: Re: fastdebug Solaris build error in jdk/jdk > > Hi Matthias, > > Your suggestion sounds reasonable. > > Thanks, > /Erik > On 2019-11-13 09:29, Baesken, Matthias wrote: > Hello, I just opened > > https://bugs.openjdk.java.net/browse/JDK-8234070 > > 8234070: solaris sparc build fails in templateInterpreterGenerator.cpp > > After 8233498: Remove dead code. , and 8233918: 8233498 broke build on SPARC > the (fast)debug Solaris build is still broken , error is > > /nightly/jdk/src/hotspot/share/interpreter/templateInterpreterGenerator.cpp", line 376: Error: verify_FPU is not a member of InterpreterMacroAssembler. > 1 Error(s) detected. 
> > > My suggestion would be to remove the verify_FPU call from templateInterpreterGenerator.cpp because it is a no-op anyway these days on almost all platforms and of very limited use . > > > > Best regards, Matthias > > > > From matthias.baesken at sap.com Wed Nov 13 14:24:59 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Wed, 13 Nov 2019 14:24:59 +0000 Subject: RFR [XS]: 8234070: solaris sparc build fails in templateInterpreterGenerator.cpp - was RE: fastdebug Solaris build error in jdk/jdk In-Reply-To: <2c2e4f4f-c3cf-ea3a-e113-524c7645c36a@oracle.com> References: <2c2e4f4f-c3cf-ea3a-e113-524c7645c36a@oracle.com> Message-ID: > Isn't this the problem that was fixed by > > https://bugs.openjdk.java.net/browse/JDK-8233918 > Hi David, I just noticed we were ***one change*** behind this one in the night-make , so yes, when I pull+update to the current head it is fixed . Maybe still 8234070 should remove verify_FPU ( as far as I see, InterpreterMacroAssembler::verify_FPU is except on x86 32bit empty) , but if people think it is still useful there we can keep it . Best regards, Matthias > > On 13/11/2019 8:02 pm, Baesken, Matthias wrote: > > Hello, please review the following change . > > > > It fixes the fastdebug Solaris Sparc build after recent changes 8233918 / > 8233498 . > > Isn't this the problem that was fixed by > > https://bugs.openjdk.java.net/browse/JDK-8233918 > > ??? > > David > ----- > > > Erik, may I add you as a reviewer ? > > > > > > > > Bug/webrev: > > https://bugs.openjdk.java.net/browse/JDK-8234070 > > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234070.0/ > > > > Thanks, Matthias > > > > > > From: Erik ?sterlund > > Sent: Mittwoch, 13. November 2019 09:41 > > To: Baesken, Matthias ; > patric.hedlin at oracle.com > > Cc: Zeller, Arno ; Langer, Christoph > > > Subject: Re: fastdebug Solaris build error in jdk/jdk > > > > Hi Matthias, > > > > Your suggestion sounds reasonable. 
> > > > Thanks, > > /Erik > > On 2019-11-13 09:29, Baesken, Matthias wrote: > > Hello, I just opened > > > > https://bugs.openjdk.java.net/browse/JDK-8234070 > > > > 8234070: solaris sparc build fails in templateInterpreterGenerator.cpp > > > > After 8233498: Remove dead code. , and 8233918: 8233498 broke build on > SPARC > > the (fast)debug Solaris build is still broken , error is > > > > > /nightly/jdk/src/hotspot/share/interpreter/templateInterpreterGenerator.c > pp", line 376: Error: verify_FPU is not a member of > InterpreterMacroAssembler. > > 1 Error(s) detected. > > > > > > My suggestion would be to remove the verify_FPU call from > templateInterpreterGenerator.cpp because it is a no-op anyway these > days on almost all platforms and of very limited use . > > > > > > > > Best regards, Matthias > > > > > > > > From daniel.daugherty at oracle.com Wed Nov 13 14:45:36 2019 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Wed, 13 Nov 2019 09:45:36 -0500 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> <1325b063-cc1b-74fe-3b78-f4eb4518d116@redhat.com> Message-ID: On 11/13/19 4:26 AM, Yangfei (Felix) wrote: >> -----Original Message----- >> From: Andrew Haley [mailto:aph at redhat.com] >> Sent: Wednesday, November 13, 2019 5:00 PM >> To: Yangfei (Felix) ; Erik ?sterlund >> ; Andrew Dinn ; >> aarch64-port-dev at openjdk.java.net >> Cc: hotspot-dev at openjdk.java.net >> Subject: Re: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic >> operations >> >> On 11/13/19 8:36 AM, Yangfei (Felix) wrote: >>> Still not strong enough? 
considering the first of ldxr of the loop may be >> speculated. >> >> Come on now, you must have read the thread on kernel-dev you pointed me to. >> > Yes, the cmpxchg case is different here. > So the v2 patch in my previous mail approved? > Will create a bug and do necessary testing. Is there a reason to not reopen this bug: JDK-8233912 aarch64: minor improvements of atomic operations https://bugs.openjdk.java.net/browse/JDK-8233912 Dan > > Thanks, > Felix From thomas.schatzl at oracle.com Wed Nov 13 15:23:18 2019 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 13 Nov 2019 16:23:18 +0100 Subject: RFR (S): 8233702: Introduce helper function to clamp value to range In-Reply-To: <4F8382A7-2BAE-42D0-9F01-7E57CA588FF2@oracle.com> References: <5e24eab5-7a81-3d14-31e4-7d81a10debff@oracle.com> <4F8382A7-2BAE-42D0-9F01-7E57CA588FF2@oracle.com> Message-ID: <8ced83cd-3374-4517-21b7-8f6401c1c81e@oracle.com> Hi, On 13.11.19 07:11, Kim Barrett wrote: >> On Nov 12, 2019, at 4:17 AM, Thomas Schatzl wrote: >> >> Hi all, >> >> I would like to introduce a small helper function to clamp a given value between a min/max value. >> This would unclutter a few MIN(MAX(value, ), ) statements for imho better readability. >> >> There are two places in (non-CMS) code remaining with the above statement, because in these cases it happens that a value min > max is passed, i.e. you potentially (already) get returned unexpected values. >> (I did remove that assert in this webrev) > > I think the clamp function should be asserting min <= max. > > I haven?t reviewed all of the changed uses yet, so not yet a review. > I re-added the assert, and re-checked in our CI with hs-tier1-5. For some reason there were some failures I thought I had fixed already. 
Sorry :( Here are new webrevs: http://cr.openjdk.java.net/~tschatzl/8233702/webrev.0_to_1/ (diff) http://cr.openjdk.java.net/~tschatzl/8233702/webrev.1/ (full) Thanks, Thomas From lutz.schmidt at sap.com Wed Nov 13 16:42:38 2019 From: lutz.schmidt at sap.com (Schmidt, Lutz) Date: Wed, 13 Nov 2019 16:42:38 +0000 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> Message-ID: Hi Kim, there is a new webrev at http://cr.openjdk.java.net/~lucy/webrevs/8233787.01/ It should be pretty close to what you view as the "right approach". There weren't too many changes relative to 8233787.00. Most files already had #include runtime/vm_version.hpp. See the following grep output: [jdk/src/hotspot] % grep -r vm_version_ * | grep include cpu/ppc/vm_version_ext_ppc.cpp:#include "vm_version_ext_ppc.hpp" cpu/s390/vm_version_ext_s390.cpp:#include "vm_version_ext_s390.hpp" cpu/zero/vm_version_ext_zero.cpp:#include "vm_version_ext_zero.hpp" cpu/x86/rdtsc_x86.cpp:#include "vm_version_ext_x86.hpp" cpu/x86/vm_version_ext_x86.cpp:#include "vm_version_ext_x86.hpp" cpu/arm/vm_version_ext_arm.cpp:#include "vm_version_ext_arm.hpp" cpu/aarch64/vm_version_ext_aarch64.cpp:#include "vm_version_ext_aarch64.hpp" cpu/sparc/vm_version_ext_sparc.cpp:#include "vm_version_ext_sparc.hpp" os/bsd/os_perf_bsd.cpp:#include CPU_HEADER(vm_version_ext) os/linux/os_perf_linux.cpp:#include CPU_HEADER(vm_version_ext) os/windows/os_perf_windows.cpp:#include CPU_HEADER(vm_version_ext) os/solaris/os_perf_solaris.cpp:#include CPU_HEADER(vm_version_ext) os/aix/os_perf_aix.cpp:#include CPU_HEADER(vm_version_ext) [jdk/src/hotspot] % [jdk/src/hotspot] % grep -r abstract_vm_version * | grep include cpu/ppc/vm_version_ppc.hpp:#include "runtime/abstract_vm_version.hpp" cpu/s390/vm_version_s390.hpp:#include "runtime/abstract_vm_version.hpp" cpu/zero/vm_version_zero.hpp:#include "runtime/abstract_vm_version.hpp" cpu/x86/vm_version_x86.hpp:#include 
"runtime/abstract_vm_version.hpp" cpu/arm/vm_version_arm.hpp:#include "runtime/abstract_vm_version.hpp" cpu/aarch64/vm_version_aarch64.hpp:#include "runtime/abstract_vm_version.hpp" cpu/sparc/vm_version_sparc.hpp:#include "runtime/abstract_vm_version.hpp" [jdk/src/hotspot] % Thanks for having a look! Lutz On 09.11.19, 02:58, "Kim Barrett" wrote: > On Nov 7, 2019, at 10:59 AM, Schmidt, Lutz wrote: > > Dear all, > > may I please request reviews for this cleanup? It's a lot of files with just some #include statement changes. That makes the review process tedious and not very challenging intellectually. > > Anyway, your effort is very much appreciated! > > jdk/submit results pending. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8233787 > Webrev: http://cr.openjdk.java.net/~lucy/webrevs/8233787.00/ > > Thank you! > Lutz I don't think this is the right approach. It makes all the vm_version_.hpp files not stand alone, which I think is not a good idea. I think the real problem is that Abstract_VM_Version is declared in vm_version.hpp. I think that file should be split into abstract_vm_version.hpp (with most of what's currently in vm_version.hpp), with vm_version.hpp being just (untested) #ifndef SHARE_RUNTIME_VM_VERSION_HPP #define SHARE_RUNTIME_VM_VERSION_HPP #include "utilities/macros.hpp" #include CPU_HEADER(vm_version) #endif // SHARE_RUNTIME_VM_VERSION_HPP Change all the vm_version_.hpp files to #include abstract_vm_version.hpp rather than vm_version.hpp. Other than in vm_version_.hpp files, always #include vm_version.hpp. 
From brent.christian at oracle.com Wed Nov 13 18:37:19 2019 From: brent.christian at oracle.com (Brent Christian) Date: Wed, 13 Nov 2019 10:37:19 -0800 Subject: RFR 8233272 : The Class.forName specification should be updated to match the long-standing implementation with respect to class linking Message-ID: <79528f01-3418-eb51-4dbd-897e4e97f853@oracle.com> Hi, Recently, the 2-arg and 3-arg Class.forName() methods were updated[1] to perform class linking, per the specification. However this change had to be reverted[2]. Instead, let's clarify the Class.forName() spec not to guarantee linking (outside the case of also performing initialization, of course). This is the long-standing behavior. I also have a test of the non-linking behavior; it's based on the test case[3] for JDK-8231924. It fails as of 14b14 (8212117) and passes as of 14b22 (8233091). Please review my webrev: http://cr.openjdk.java.net/~bchristi/8233272/webrev-02/ If the wording looks good, I'll fill in the Specification for the CSR[4] I've started. Thanks, -Brent 1. https://bugs.openjdk.java.net/browse/JDK-8212117 2. https://bugs.openjdk.java.net/browse/JDK-8233091 3. https://mail.openjdk.java.net/pipermail/core-libs-dev/2019-October/062747.html 4. https://bugs.openjdk.java.net/browse/JDK-8233554 From gerard.ziemski at oracle.com Wed Nov 13 18:50:30 2019 From: gerard.ziemski at oracle.com (gerard ziemski) Date: Wed, 13 Nov 2019 12:50:30 -0600 Subject: RFR (M) 8223261 "JDK-8189208 followup - remove JDK_GetVersionInfo0 and the supporting code" Message-ID: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> Hi all, Please review this cleanup, where we remove JDK_GetVersionInfo0 and related code, since we can get build versions directly from within the VM itself: I'm including core-libs and awt in this review because the proposed fix touches their corresponding files. 
bug: https://bugs.openjdk.java.net/browse/JDK-8223261 webrev: http://cr.openjdk.java.net/~gziemski/8223261_rev1 tests: passes Mach5 tier1,2,3,4,5,6 cheers From claes.redestad at oracle.com Wed Nov 13 19:04:00 2019 From: claes.redestad at oracle.com (Claes Redestad) Date: Wed, 13 Nov 2019 20:04:00 +0100 Subject: RFR (M) 8223261 "JDK-8189208 followup - remove JDK_GetVersionInfo0 and the supporting code" In-Reply-To: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> References: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> Message-ID: On 2019-11-13 19:50, gerard ziemski wrote: > webrev: http://cr.openjdk.java.net/~gziemski/8223261_rev1 Nice cleanup! Looks good to me. /Claes From mandy.chung at oracle.com Wed Nov 13 19:29:52 2019 From: mandy.chung at oracle.com (Mandy Chung) Date: Wed, 13 Nov 2019 11:29:52 -0800 Subject: RFR (M) 8223261 "JDK-8189208 followup - remove JDK_GetVersionInfo0 and the supporting code" In-Reply-To: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> References: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> Message-ID: On 11/13/19 10:50 AM, gerard ziemski wrote: > Hi all, > > Please review this cleanup, where we remove JDK_GetVersionInfo0 and > related code, since we can get build versions directly from within the > VM itself: > > I'm including core-libs and awt in this review because the proposed > fix touches their corresponding files. > > > bug: https://bugs.openjdk.java.net/browse/JDK-8223261 > webrev: http://cr.openjdk.java.net/~gziemski/8223261_rev1 > tests: passes Mach5 tier1,2,3,4,5,6 > This is a good clean up.? JDK_GetVersionInfo0 was needed long time ago in particular in HS express support that is no longer applicable. One leftover comment should also be removed. src/hotspot/share/runtime/vm_version.hpp ? // Gets the jvm_version_info.jvm_version defined in jvm.h otherwise looks good. 
Mandy From christoph.langer at sap.com Wed Nov 13 20:05:12 2019 From: christoph.langer at sap.com (Langer, Christoph) Date: Wed, 13 Nov 2019 20:05:12 +0000 Subject: RFR (M) 8223261 "JDK-8189208 followup - remove JDK_GetVersionInfo0 and the supporting code" In-Reply-To: References: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> Message-ID: Hi Gerard, generally it looks like a nice cleanup. I've got a patch prepared though, which I was planning on posting tomorrow. It is about cleanup for the canonicalize function in libjava. I wanted to use jdk_util.h for the function prototype. I had not yet filed a bug but here is what I have: http://cr.openjdk.java.net/~clanger/webrevs/cleanup-canonicalize/ So maybe you could refrain from removing jdk_util.h or maybe you can hold off submitting your change until my cleanup is reviewed? I'll create a bug and post an official review thread tomorrow... Thanks Christoph > -----Original Message----- > From: hotspot-dev On Behalf Of > Mandy Chung > Sent: Mittwoch, 13. November 2019 20:30 > To: gerard ziemski > Cc: awt-dev at openjdk.java.net; hotspot-dev developers dev at openjdk.java.net>; core-libs-dev at openjdk.java.net > Subject: Re: RFR (M) 8223261 "JDK-8189208 followup - remove > JDK_GetVersionInfo0 and the supporting code" > > > > On 11/13/19 10:50 AM, gerard ziemski wrote: > > Hi all, > > > > Please review this cleanup, where we remove JDK_GetVersionInfo0 and > > related code, since we can get build versions directly from within the > > VM itself: > > > > I'm including core-libs and awt in this review because the proposed > > fix touches their corresponding files. > > > > > > bug: https://bugs.openjdk.java.net/browse/JDK-8223261 > > webrev: http://cr.openjdk.java.net/~gziemski/8223261_rev1 > > tests: passes Mach5 tier1,2,3,4,5,6 > > > > This is a good clean up.? JDK_GetVersionInfo0 was needed long time ago > in particular in HS express support that is no longer applicable. > > One leftover comment should also be removed. 
> > src/hotspot/share/runtime/vm_version.hpp > ? // Gets the jvm_version_info.jvm_version defined in jvm.h > > otherwise looks good. > > Mandy From kim.barrett at oracle.com Wed Nov 13 23:34:46 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 13 Nov 2019 18:34:46 -0500 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> Message-ID: <9BDEBE4D-DA1F-4087-937A-56ADA1E1F7A5@oracle.com> > On Nov 13, 2019, at 11:42 AM, Schmidt, Lutz wrote: > > Hi Kim, > > there is a new webrev at http://cr.openjdk.java.net/~lucy/webrevs/8233787.01/ > > It should be pretty close to what you view as the "right approach". There weren't too many changes relative to 8233787.00. Most files already had #include runtime/vm_version.hpp. This looks much better to me, but many (most?) of the changed #includes need to be moved into sort order. ------------------------------------------------------------------------------ src/hotspot/share/runtime/vm_version.cpp Abstract_VM_Version definitions should be moved to abstract_vm_version.cpp. Maybe just rename the file; I think the only thing that would be left for vm_version.cpp would be VM_Version_init(). But maybe that should be left behind in vm_version.cpp? Though that makes the review messier. ------------------------------------------------------------------------------ src/hotspot/share/runtime/abstract_vm_version.hpp Should #include globalDefinitions.hpp. - uint64_t features() - #define SUPPORTS_NATIVE_CX8 Should forward-declare class outsputStream. 
- print_virtualization_info - print_matching_lines_from_file (I wonder why this is *here*, but not your problem) ------------------------------------------------------------------------------ From jschlather at hubspot.com Thu Nov 14 02:56:58 2019 From: jschlather at hubspot.com (Jacob Schlather) Date: Wed, 13 Nov 2019 21:56:58 -0500 Subject: Native Memory Tracking Bug Message-ID: We're currently in the process of upgrading our Java applications from Java 8 to Java 11. After deploying some of our production applications with Java 11, we began to see the resident memory size grow without bound until our orchestrator killed the applications for excessive memory usage. We've started to debug this issue, but noticed that the NMT output appears to be incorrect. In particular the Compiler section is displaying Compiler (reserved=4528KB +1010KB, committed=4528KB +1010KB) (malloc=4896KB +1206KB #4132 +508) (arena=18014398509481617KB -196 #5) Obviously the arena value here is quite wrong and there's no way the reserved memory can be less than the malloc memory. Further there's a 276305KB gap in the RSS size reported by our metrics and the amount of memory NMT reports as committed. Here's our JVM args and JDK version, I've additionally attached the full output of the NMT detailed diff. 
Running java11 with JVM arguments: -Djava.net.preferIPv4Stack=true -Xms6144m -Xmx6g -Xss256k -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=45 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=6 -XX:InitiatingHeapOccupancyPercent=35 -XX:+PerfDisableSharedMem -XX:-UseBiasedLocking -XX:G1HeapRegionSize=4m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=../logs -Djava.io.tmpdir=REDACTED -XX:+ExitOnOutOfMemoryError -XX:-OmitStackTraceInFastThrow -javaagent:/usr/local/appoptics-6.5.1/appoptics-agent.jar=config=appoptics-agent.json -XX:-PreferContainerQuotaForCPUCount -XX:NativeMemoryTracking=detail -jar REDACTED openjdk version "11.0.5" 2019-10-15 OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.5+10) OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.5+10, mixed mode) From jschlather at hubspot.com Thu Nov 14 03:02:58 2019 From: jschlather at hubspot.com (Jacob Schlather) Date: Wed, 13 Nov 2019 22:02:58 -0500 Subject: Native Memory Tracking Bug In-Reply-To: References: Message-ID: Sorry, looks like my file attachment was removed here's a gist https://gist.github.com/jschlather/51828672756d0dee94591e7943490aa5 of the NMT output. > From zgu at redhat.com Thu Nov 14 03:13:54 2019 From: zgu at redhat.com (Zhengyu Gu) Date: Wed, 13 Nov 2019 22:13:54 -0500 Subject: Native Memory Tracking Bug In-Reply-To: References: Message-ID: <7324a11c-1a46-12d6-4a06-cba7afe2a344@redhat.com> Hi Jacob, It looks like JDK-8204128 [1] strikes again. It would be very helpful if you can provide a reproducer. Thanks, -Zhengyu [1] https://bugs.openjdk.java.net/browse/JDK-8204128 On 11/13/19 9:56 PM, Jacob Schlather wrote: > We're currently in the process of upgrading our Java applications from Java > 8 to Java 11. 
After deploying some of our production applications with Java > 11, we began to see the resident memory size grow without bound until our > orchestrator killed the applications for excessive memory usage. We've > started to debug this issue, but noticed that the NMT output appears to be > incorrect. In particular the Compiler section is displaying > > Compiler (reserved=4528KB +1010KB, committed=4528KB +1010KB) > (malloc=4896KB +1206KB #4132 +508) > (arena=18014398509481617KB -196 #5) > > Obviously the arena value here is quite wrong and there's no way the > reserved memory can be less than the malloc memory. Further there's > a 276305KB gap in the RSS size reported by our metrics and the amount of > memory NMT reports as committed. Here's our JVM args and JDK version, I've > additionally attached the full output of the NMT detailed diff. > > Running java11 with JVM arguments: -Djava.net.preferIPv4Stack=true > -Xms6144m -Xmx6g -Xss256k -XX:MetaspaceSize=128m > -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=128m > -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:+UnlockExperimentalVMOptions > -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=45 > -XX:ParallelGCThreads=8 -XX:ConcGCThreads=6 > -XX:InitiatingHeapOccupancyPercent=35 -XX:+PerfDisableSharedMem > -XX:-UseBiasedLocking -XX:G1HeapRegionSize=4m > -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=../logs > -Djava.io.tmpdir=REDACTED -XX:+ExitOnOutOfMemoryError > -XX:-OmitStackTraceInFastThrow > -javaagent:/usr/local/appoptics-6.5.1/appoptics-agent.jar=config=appoptics-agent.json > -XX:-PreferContainerQuotaForCPUCount -XX:NativeMemoryTracking=detail > -jar REDACTED > openjdk version "11.0.5" 2019-10-15 > OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.5+10) > OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.5+10, mixed mode) > From david.holmes at oracle.com Thu Nov 14 03:21:50 2019 From: david.holmes at oracle.com (David Holmes) Date: Thu, 14 Nov 2019 13:21:50 +1000 Subject: RFR (M) 8223261 "JDK-8189208 
followup - remove JDK_GetVersionInfo0 and the supporting code" In-Reply-To: References: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> Message-ID: Hi Gerard, On 14/11/2019 6:05 am, Langer, Christoph wrote: > Hi Gerard, > > generally it looks like a nice cleanup. > > I've got a patch prepared though, which I was planning on posting tomorrow. It is about cleanup for the canonicalize function in libjava. I wanted to use jdk_util.h for the function prototype. I had not yet filed a bug but here is what I have: > http://cr.openjdk.java.net/~clanger/webrevs/cleanup-canonicalize/ > > So maybe you could refrain from removing jdk_util.h or maybe you can hold off submitting your change until my cleanup is reviewed? I'd also suggest not deleting jdk_util.h. It seems very odd to me to have jdk_util_md.h with no shared jdk_util.h. If you keep jdk_util.h then you don't need to touch a number of the files. Otherwise this looks like a good cleanup. Thanks, David ----- > I'll create a bug and post an official review thread tomorrow... > > Thanks > Christoph > >> -----Original Message----- >> From: hotspot-dev On Behalf Of >> Mandy Chung >> Sent: Mittwoch, 13. November 2019 20:30 >> To: gerard ziemski >> Cc: awt-dev at openjdk.java.net; hotspot-dev developers > dev at openjdk.java.net>; core-libs-dev at openjdk.java.net >> Subject: Re: RFR (M) 8223261 "JDK-8189208 followup - remove >> JDK_GetVersionInfo0 and the supporting code" >> >> >> >> On 11/13/19 10:50 AM, gerard ziemski wrote: >>> Hi all, >>> >>> Please review this cleanup, where we remove JDK_GetVersionInfo0 and >>> related code, since we can get build versions directly from within the >>> VM itself: >>> >>> I'm including core-libs and awt in this review because the proposed >>> fix touches their corresponding files. >>> >>> >>> bug: https://bugs.openjdk.java.net/browse/JDK-8223261 >>> webrev: http://cr.openjdk.java.net/~gziemski/8223261_rev1 >>> tests: passes Mach5 tier1,2,3,4,5,6 >>> >> >> This is a good clean up.? 
JDK_GetVersionInfo0 was needed long time ago >> in particular in HS express support that is no longer applicable. >> >> One leftover comment should also be removed. >> >> src/hotspot/share/runtime/vm_version.hpp >> ? // Gets the jvm_version_info.jvm_version defined in jvm.h >> >> otherwise looks good. >> >> Mandy From kim.barrett at oracle.com Thu Nov 14 03:27:30 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 13 Nov 2019 22:27:30 -0500 Subject: RFR (S): 8233702: Introduce helper function to clamp value to range In-Reply-To: <8ced83cd-3374-4517-21b7-8f6401c1c81e@oracle.com> References: <5e24eab5-7a81-3d14-31e4-7d81a10debff@oracle.com> <4F8382A7-2BAE-42D0-9F01-7E57CA588FF2@oracle.com> <8ced83cd-3374-4517-21b7-8f6401c1c81e@oracle.com> Message-ID: > On Nov 13, 2019, at 10:23 AM, Thomas Schatzl wrote: > > I re-added the assert, and re-checked in our CI with hs-tier1-5. For some reason there were some failures I thought I had fixed already. Sorry :( > > Here are new webrevs: > > http://cr.openjdk.java.net/~tschatzl/8233702/webrev.0_to_1/ (diff) > http://cr.openjdk.java.net/~tschatzl/8233702/webrev.1/ (full) > > Thanks, > Thomas ------------------------------------------------------------------------------ src/hotspot/share/compiler/compilerDefinitions.cpp 355 FLAG_SET_DEFAULT(MetaspaceSize, clamp(MetaspaceSize, 12*M, MaxMetaspaceSize)); I've not found anything that guarantees MaxMetaspaceSize >= 12*M. ------------------------------------------------------------------------------ src/hotspot/share/gc/shared/threadLocalAllocBuffer.cpp 254 // We can't use clamp() here because min_size() and max_size() because some s/here because min_size()/here between min_size()/ ------------------------------------------------------------------------------ src/hotspot/share/runtime/globals.hpp 1408 product(intx, AllocatePrefetchDistance, -1, \ 1409 "Distance to prefetch ahead of allocation pointer. 
" \ 1410 "-1: use system-specific value (automatically determined") \ 1411 range(-1, 512) \ 1412 constraint(AllocatePrefetchDistanceConstraintFunc,AfterMemoryInit)\ With the addition of the range restriction, is the constraint function still needed? I don't remember whether a range restriction is applied to assignments such as are being done in various vm_version_.cpp files. ------------------------------------------------------------------------------ From Sergey.Bylokhov at oracle.com Thu Nov 14 03:44:05 2019 From: Sergey.Bylokhov at oracle.com (Sergey Bylokhov) Date: Wed, 13 Nov 2019 19:44:05 -0800 Subject: RFR (M) 8223261 "JDK-8189208 followup - remove JDK_GetVersionInfo0 and the supporting code" In-Reply-To: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> References: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> Message-ID: <52ca300d-c870-a7d4-934d-7e5fcfe77446@oracle.com> Looks fine. On 11/13/19 10:50 am, gerard ziemski wrote: > Hi all, > > Please review this cleanup, where we remove JDK_GetVersionInfo0 and related code, since we can get build versions directly from within the VM itself: > > I'm including core-libs and awt in this review because the proposed fix touches their corresponding files. > > > bug: https://bugs.openjdk.java.net/browse/JDK-8223261 > webrev: http://cr.openjdk.java.net/~gziemski/8223261_rev1 > tests: passes Mach5 tier1,2,3,4,5,6 > > > cheers > -- Best regards, Sergey. 
From felix.yang at huawei.com Thu Nov 14 12:26:15 2019 From: felix.yang at huawei.com (Yangfei (Felix)) Date: Thu, 14 Nov 2019 12:26:15 +0000 Subject: [aarch64-port-dev ] RFR: aarch64: minor improvements of atomic operations In-Reply-To: References: <65e93675-a3cf-53ac-6894-bb4124c55f93@redhat.com> <1f4c99ac-461c-7795-1a74-a494bdba3672@redhat.com> <1cc3ab16-eaab-d031-3df0-c9133de24f88@redhat.com> <8b527457-c371-45ae-bb54-0a048f9ee6f8@redhat.com> <32ea3e22-9f7a-9aaa-c86a-79ed175a1c7b@redhat.com> <83f92211-2c64-69d0-457c-c059acbccf63@oracle.com> <0d718a85-d669-f4b4-ae90-db1f7bb56b45@redhat.com> <1325b063-cc1b-74fe-3b78-f4eb4518d116@redhat.com> Message-ID: . > > Is there a reason to not reopen this bug: > > JDK-8233912 aarch64: minor improvements of atomic operations > https://bugs.openjdk.java.net/browse/JDK-8233912 > > Dan > Reopened and modified the problem description on that bug. Webrev: http://cr.openjdk.java.net/~fyang/8233912/webrev.00/ The webrev also adds one comment from aph. Passed tier1 & 2 & 3 tests. Also ran the jcstress test. Will do the push. Thanks, Felix From gerard.ziemski at oracle.com Thu Nov 14 15:10:14 2019 From: gerard.ziemski at oracle.com (gerard ziemski) Date: Thu, 14 Nov 2019 09:10:14 -0600 Subject: RFR (M) 8223261 "JDK-8189208 followup - remove JDK_GetVersionInfo0 and the supporting code" In-Reply-To: References: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> Message-ID: Thank you for the review; I will remove the comment and update the webrev, but only after Christoph Langer gets his fix in - I'm going to wait for him to check in first. cheers On 11/13/19 1:29 PM, Mandy Chung wrote: > > > On 11/13/19 10:50 AM, gerard ziemski wrote: >> Hi all, >> >> Please review this cleanup, where we remove JDK_GetVersionInfo0 and >> related code, since we can get build versions directly from within >> the VM itself: >> >> I'm including core-libs and awt in this review because the proposed >> fix touches their corresponding files.
>> >> >> bug: https://bugs.openjdk.java.net/browse/JDK-8223261 >> webrev: http://cr.openjdk.java.net/~gziemski/8223261_rev1 >> tests: passes Mach5 tier1,2,3,4,5,6 >> > > This is a good clean up.? JDK_GetVersionInfo0 was needed long time ago > in particular in HS express support that is no longer applicable. > > One leftover comment should also be removed. > > src/hotspot/share/runtime/vm_version.hpp > ? // Gets the jvm_version_info.jvm_version defined in jvm.h > > otherwise looks good. > > Mandy From gerard.ziemski at oracle.com Thu Nov 14 15:11:16 2019 From: gerard.ziemski at oracle.com (gerard ziemski) Date: Thu, 14 Nov 2019 09:11:16 -0600 Subject: RFR (M) 8223261 "JDK-8189208 followup - remove JDK_GetVersionInfo0 and the supporting code" In-Reply-To: References: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> Message-ID: <59bfddd2-34ec-5fe9-1f01-481fb749af5d@oracle.com> Thank you for the review. I'm happy to wait for your fix to go in first - this will make my fix smaller. cheers On 11/13/19 2:05 PM, Langer, Christoph wrote: > Hi Gerard, > > generally it looks like a nice cleanup. > > I've got a patch prepared though, which I was planning on posting tomorrow. It is about cleanup for the canonicalize function in libjava. I wanted to use jdk_util.h for the function prototype. I had not yet filed a bug but here is what I have: > http://cr.openjdk.java.net/~clanger/webrevs/cleanup-canonicalize/ > > So maybe you could refrain from removing jdk_util.h or maybe you can hold off submitting your change until my cleanup is reviewed? > > I'll create a bug and post an official review thread tomorrow... > > Thanks > Christoph > >> -----Original Message----- >> From: hotspot-dev On Behalf Of >> Mandy Chung >> Sent: Mittwoch, 13. 
November 2019 20:30 >> To: gerard ziemski >> Cc: awt-dev at openjdk.java.net; hotspot-dev developers > dev at openjdk.java.net>; core-libs-dev at openjdk.java.net >> Subject: Re: RFR (M) 8223261 "JDK-8189208 followup - remove >> JDK_GetVersionInfo0 and the supporting code" >> >> >> >> On 11/13/19 10:50 AM, gerard ziemski wrote: >>> Hi all, >>> >>> Please review this cleanup, where we remove JDK_GetVersionInfo0 and >>> related code, since we can get build versions directly from within the >>> VM itself: >>> >>> I'm including core-libs and awt in this review because the proposed >>> fix touches their corresponding files. >>> >>> >>> bug: https://bugs.openjdk.java.net/browse/JDK-8223261 >>> webrev: http://cr.openjdk.java.net/~gziemski/8223261_rev1 >>> tests: passes Mach5 tier1,2,3,4,5,6 >>> >> This is a good clean up. JDK_GetVersionInfo0 was needed a long time ago >> in particular in HS express support that is no longer applicable. >> >> One leftover comment should also be removed. >> >> src/hotspot/share/runtime/vm_version.hpp >> // Gets the jvm_version_info.jvm_version defined in jvm.h >> >> otherwise looks good. >> >> Mandy From gerard.ziemski at oracle.com Thu Nov 14 15:13:41 2019 From: gerard.ziemski at oracle.com (gerard ziemski) Date: Thu, 14 Nov 2019 09:13:41 -0600 Subject: RFR (M) 8223261 "JDK-8189208 followup - remove JDK_GetVersionInfo0 and the supporting code" In-Reply-To: References: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> Message-ID: <306d4073-7d64-d807-b670-52126462e81c@oracle.com> Thank you for the review. I'm definitely going to wait for Christoph to check in his fix first. I tried in fact to leave the jdk_util.c/.h files empty without removing them from the repo, and even though Mac/Linux were OK with that, Solaris/Windows were not. cheers On 11/13/19 9:21 PM, David Holmes wrote: > Hi Gerard, > > On 14/11/2019 6:05 am, Langer, Christoph wrote: >> Hi Gerard, >> >> generally it looks like a nice cleanup.
>> >> I've got a patch prepared though, which I was planning on posting >> tomorrow. It is about cleanup for the canonicalize function in >> libjava. I wanted to use jdk_util.h for the function prototype. I had >> not yet filed a bug but here is what I have: >> http://cr.openjdk.java.net/~clanger/webrevs/cleanup-canonicalize/ >> >> So maybe you could refrain from removing jdk_util.h or maybe you can >> hold off submitting your change until my cleanup is reviewed? > > I'd also suggest not deleting jdk_util.h. It seems very odd to me to > have jdk_util_md.h with no shared jdk_util.h. If you keep jdk_util.h > then you don't need to touch a number of the files. > > Otherwise this looks like a good cleanup. > > Thanks, > David > ----- > >> I'll create a bug and post an official review thread tomorrow... >> >> Thanks >> Christoph >> >>> -----Original Message----- >>> From: hotspot-dev On Behalf Of >>> Mandy Chung >>> Sent: Mittwoch, 13. November 2019 20:30 >>> To: gerard ziemski >>> Cc: awt-dev at openjdk.java.net; hotspot-dev developers >> dev at openjdk.java.net>; core-libs-dev at openjdk.java.net >>> Subject: Re: RFR (M) 8223261 "JDK-8189208 followup - remove >>> JDK_GetVersionInfo0 and the supporting code" >>> >>> >>> >>> On 11/13/19 10:50 AM, gerard ziemski wrote: >>>> Hi all, >>>> >>>> Please review this cleanup, where we remove JDK_GetVersionInfo0 and >>>> related code, since we can get build versions directly from within the >>>> VM itself: >>>> >>>> I'm including core-libs and awt in this review because the proposed >>>> fix touches their corresponding files. >>>> >>>> >>>> bug: https://bugs.openjdk.java.net/browse/JDK-8223261 >>>> webrev: http://cr.openjdk.java.net/~gziemski/8223261_rev1 >>>> tests: passes Mach5 tier1,2,3,4,5,6 >>>> >>> >>> This is a good clean up.? JDK_GetVersionInfo0 was needed long time ago >>> in particular in HS express support that is no longer applicable. >>> >>> One leftover comment should also be removed. 
>>> >>> src/hotspot/share/runtime/vm_version.hpp >>> ? ? // Gets the jvm_version_info.jvm_version defined in jvm.h >>> >>> otherwise looks good. >>> >>> Mandy From thomas.schatzl at oracle.com Thu Nov 14 15:58:51 2019 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 14 Nov 2019 16:58:51 +0100 Subject: RFR (S): 8233702: Introduce helper function to clamp value to range In-Reply-To: References: <5e24eab5-7a81-3d14-31e4-7d81a10debff@oracle.com> <4F8382A7-2BAE-42D0-9F01-7E57CA588FF2@oracle.com> <8ced83cd-3374-4517-21b7-8f6401c1c81e@oracle.com> Message-ID: <290c83af-f554-9236-4f85-740736374e8d@oracle.com> AllocatePrefetchDistance Hi, On 14.11.19 04:27, Kim Barrett wrote: >> On Nov 13, 2019, at 10:23 AM, Thomas Schatzl wrote: >> >> I re-added the assert, and re-checked in our CI with hs-tier1-5. For some reason there were some failures I thought I had fixed already. Sorry :( >> >> Here are new webrevs: >> >> http://cr.openjdk.java.net/~tschatzl/8233702/webrev.0_to_1/ (diff) >> http://cr.openjdk.java.net/~tschatzl/8233702/webrev.1/ (full) >> >> Thanks, >> Thomas > > ------------------------------------------------------------------------------ > src/hotspot/share/compiler/compilerDefinitions.cpp > 355 FLAG_SET_DEFAULT(MetaspaceSize, clamp(MetaspaceSize, 12*M, MaxMetaspaceSize)); > > I've not found anything that guarantees MaxMetaspaceSize >= 12*M. > Reverted. > ------------------------------------------------------------------------------ > src/hotspot/share/gc/shared/threadLocalAllocBuffer.cpp > 254 // We can't use clamp() here because min_size() and max_size() because some > > s/here because min_size()/here between min_size()/ Fixed. > > ------------------------------------------------------------------------------ > src/hotspot/share/runtime/globals.hpp > 1408 product(intx, AllocatePrefetchDistance, -1, \ > 1409 "Distance to prefetch ahead of allocation pointer. 
" \ > 1410 "-1: use system-specific value (automatically determined") \ > 1411 range(-1, 512) \ > 1412 constraint(AllocatePrefetchDistanceConstraintFunc,AfterMemoryInit)\ > > With the addition of the range restriction, is the constraint function > still needed? I don't remember whether a range restriction is applied > to assignments such as are being done in various vm_version_.cpp > files. > Yes, because the function is restricting between 0 and 512, but a -1 input value is allowed. But I reverted this change because ultimately it is the same issue as the one for MinTLABSize in ThreadLocalAllocBuffer::initial_desired_size() which required me to not use clamp() there. I.e. at that time some variables (including AllocatePrefetchDistance) are not stable yet. However when I did the changes I first fixed the AllocatePrefetchDistance issue, then removed the clamp() there without reconsidering the earlier change. New webrevs: http://cr.openjdk.java.net/~tschatzl/8233702/webrev.1_to_2/ (diff) http://cr.openjdk.java.net/~tschatzl/8233702/webrev.2/ (full) Passed hs-tier1-5. Thanks, Thomas From mandy.chung at oracle.com Thu Nov 14 16:22:36 2019 From: mandy.chung at oracle.com (Mandy Chung) Date: Thu, 14 Nov 2019 08:22:36 -0800 Subject: RFR 8233272 : The Class.forName specification should be updated to match the long-standing implementation with respect to class linking In-Reply-To: <79528f01-3418-eb51-4dbd-897e4e97f853@oracle.com> References: <79528f01-3418-eb51-4dbd-897e4e97f853@oracle.com> Message-ID: <7ddee581-c68b-bf40-22db-871205b8ca1b@oracle.com> On 11/13/19 10:37 AM, Brent Christian wrote: > Hi, > > Recently, the 2-arg and 3-arg Class.forName() methods were updated[1] > to perform class linking, per the specification. However this change > had to be reverted[2]. > > Instead, let's clarify the Class.forName() spec not to guarantee > linking (outside the case of also performing initialization, of > course).? This is the long-standing behavior. 
> > I also have a test of the non-linking behavior; it's based on the test > case[3] for JDK-8231924.? It fails as of 14b14 (8212117) and passes as > of 14b22 (8233091). > > Please review my webrev: > http://cr.openjdk.java.net/~bchristi/8233272/webrev-02/ > > If the wording looks good, I'll fill in the Specification for the > CSR[4] I've started. The spec change looks fine. As for the test, I expect that it simply calls Class.forName("Provider", false, ucl) and then should succeed. Then calling Class.forName("Provider", true, ucl) should fail with an error (I think it's EIIE with NCDFE?).? This way it verifies that initialization/linking does cause NCDFE during verification while Class.forName does not do linking if initialize=false. Mandy From lutz.schmidt at sap.com Thu Nov 14 16:26:32 2019 From: lutz.schmidt at sap.com (Schmidt, Lutz) Date: Thu, 14 Nov 2019 16:26:32 +0000 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: <9BDEBE4D-DA1F-4087-937A-56ADA1E1F7A5@oracle.com> References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> <9BDEBE4D-DA1F-4087-937A-56ADA1E1F7A5@oracle.com> Message-ID: Hi Kim, that wasn't straightforward. Had to adapt make/hotspot/lib/CompileJvm.gmk. Build settings like HOTSPOT_VERSION_STRING have to flow into the compile step of abstract_vm_version.cpp now. For the details, see my comments below. Other than that, I hope the new webrev is even closer to your dreams: http://cr.openjdk.java.net/~lucy/webrevs/8233787.02/ Thanks, Lutz ?On 14.11.19, 00:34, "Kim Barrett" wrote: > On Nov 13, 2019, at 11:42 AM, Schmidt, Lutz wrote: > > Hi Kim, > > there is a new webrev at http://cr.openjdk.java.net/~lucy/webrevs/8233787.01/ > > It should be pretty close to what you view as the "right approach". There weren't too many changes relative to 8233787.00. Most files already had #include runtime/vm_version.hpp. This looks much better to me, but many (most?) of the changed #includes need to be moved into sort order. 
R: tried my best to fix the sort order. Sorry for not paying attention in the first place. ------------------------------------------------------------------------------ src/hotspot/share/runtime/vm_version.cpp Abstract_VM_Version definitions should be moved to abstract_vm_version.cpp. Maybe just rename the file; I think the only thing that would be left for vm_version.cpp would be VM_Version_init(). But maybe that should be left behind in vm_version.cpp? Though that makes the review messier. R: Everything moved as you suggested. Doesn't make sense to have Abstract_VM_Version:: methods in vm_version.cpp file. ------------------------------------------------------------------------------ src/hotspot/share/runtime/abstract_vm_version.hpp Should #include globalDefinitions.hpp. - uint64_t features() - #define SUPPORTS_NATIVE_CX8 R: Done. Should forward-declare class outsputStream. - print_virtualization_info - print_matching_lines_from_file (I wonder why this is *here*, but not your problem) R: Done. ------------------------------------------------------------------------------ From jschlather at hubspot.com Thu Nov 14 17:18:35 2019 From: jschlather at hubspot.com (Jacob Schlather) Date: Thu, 14 Nov 2019 12:18:35 -0500 Subject: Native Memory Tracking Bug In-Reply-To: <7324a11c-1a46-12d6-4a06-cba7afe2a344@redhat.com> References: <7324a11c-1a46-12d6-4a06-cba7afe2a344@redhat.com> Message-ID: Hi Zhengyu, This bug seems to happen every time for one of our web services running in production, but it doesn't happen in qa, so I don't think I'll be able to provide a reproducer. If there are some lightweight debugging options we could turn on and provide that output to you, I could do that. Having dug into the memory issue we're seeing, what it looks like is that in Java 8 the committed memory as shown by NMT is significantly less than the memory the JVM was actually using. Now on Java 11, the JVM seems to be using much closer to the committed memory. 
Do you happen to know of anything we could look into for that? Thanks. On Wed, Nov 13, 2019 at 10:14 PM Zhengyu Gu wrote: > Hi Jacob, > > It looks like JDK-8204128 [1] strikes again. It would be very helpful if > you can provide a reproducer. > > Thanks, > > -Zhengyu > > > [1] https://bugs.openjdk.java.net/browse/JDK-8204128 > > > > On 11/13/19 9:56 PM, Jacob Schlather wrote: > > We're currently in the process of upgrading our Java applications from > Java > > 8 to Java 11. After deploying some of our production applications with > Java > > 11, we began to see the resident memory size grow without bound until our > > orchestrator killed the applications for excessive memory usage. We've > > started to debug this issue, but noticed that the NMT output appears to > be > > incorrect. In particular the Compiler section is displaying > > > > Compiler (reserved=4528KB +1010KB, committed=4528KB +1010KB) > > (malloc=4896KB +1206KB #4132 +508) > > (arena=18014398509481617KB -196 #5) > > > > Obviously the arena value here is quite wrong and there's no way the > > reserved memory can be less than the malloc memory. Further there's > > a 276305KB gap in the RSS size reported by our metrics and the amount of > > memory NMT reports as committed. Here's our JVM args and JDK version, > I've > > additionally attached the full output of the NMT detailed diff. 
> > > > Running java11 with JVM arguments: -Djava.net.preferIPv4Stack=true > > -Xms6144m -Xmx6g -Xss256k -XX:MetaspaceSize=128m > > -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=128m > > -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:+UnlockExperimentalVMOptions > > -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=45 > > -XX:ParallelGCThreads=8 -XX:ConcGCThreads=6 > > -XX:InitiatingHeapOccupancyPercent=35 -XX:+PerfDisableSharedMem > > -XX:-UseBiasedLocking -XX:G1HeapRegionSize=4m > > -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=../logs > > -Djava.io.tmpdir=REDACTED -XX:+ExitOnOutOfMemoryError > > -XX:-OmitStackTraceInFastThrow > > > -javaagent:/usr/local/appoptics-6.5.1/appoptics-agent.jar=config=appoptics-agent.json > > -XX:-PreferContainerQuotaForCPUCount -XX:NativeMemoryTracking=detail > > -jar REDACTED > > openjdk version "11.0.5" 2019-10-15 > > OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.5+10) > > OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.5+10, mixed mode) > > > From jschlather at hubspot.com Thu Nov 14 18:06:47 2019 From: jschlather at hubspot.com (Jacob Schlather) Date: Thu, 14 Nov 2019 13:06:47 -0500 Subject: Native Memory Tracking Bug In-Reply-To: References: <7324a11c-1a46-12d6-4a06-cba7afe2a344@redhat.com> Message-ID: Upon further investigation it appears that NMT is not accurately tracking the memory usage of the JVM. I ran NMT summary and got the following output Total: reserved=9095231KB, committed=8746919KB And then I ran pmap on the host to check memory usage sudo pmap -x 89186 | tail -n 1 total kB 34497556 9397564 9352480 Is this sort of gap expected? On Thu, Nov 14, 2019 at 12:18 PM Jacob Schlather wrote: > Hi Zhengyu, > > This bug seems to happen every time for one of our web services running in > production, but it doesn't happen in qa, so I don't think I'll be able to > provide a reproducer. If there are some lightweight debugging options we > could turn on and provide that output to you, I could do that. 
> > Having dug into the memory issue we're seeing, what it looks like is that > in Java 8 the committed memory as shown by NMT is significantly less than > the memory the JVM was actually using. Now on Java 11, the JVM seems to be > using much closer to the committed memory. Do you happen to know of > anything we could look into for that? Thanks. > > On Wed, Nov 13, 2019 at 10:14 PM Zhengyu Gu wrote: > >> Hi Jacob, >> >> It looks like JDK-8204128 [1] strikes again. It would be very helpful if >> you can provide a reproducer. >> >> Thanks, >> >> -Zhengyu >> >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8204128 >> >> >> >> On 11/13/19 9:56 PM, Jacob Schlather wrote: >> > We're currently in the process of upgrading our Java applications from >> Java >> > 8 to Java 11. After deploying some of our production applications with >> Java >> > 11, we began to see the resident memory size grow without bound until >> our >> > orchestrator killed the applications for excessive memory usage. We've >> > started to debug this issue, but noticed that the NMT output appears to >> be >> > incorrect. In particular the Compiler section is displaying >> > >> > Compiler (reserved=4528KB +1010KB, committed=4528KB +1010KB) >> > (malloc=4896KB +1206KB #4132 +508) >> > (arena=18014398509481617KB -196 #5) >> > >> > Obviously the arena value here is quite wrong and there's no way the >> > reserved memory can be less than the malloc memory. Further there's >> > a 276305KB gap in the RSS size reported by our metrics and the amount of >> > memory NMT reports as committed. Here's our JVM args and JDK version, >> I've >> > additionally attached the full output of the NMT detailed diff. 
>> > >> > Running java11 with JVM arguments: -Djava.net.preferIPv4Stack=true >> > -Xms6144m -Xmx6g -Xss256k -XX:MetaspaceSize=128m >> > -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=128m >> > -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:+UnlockExperimentalVMOptions >> > -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=45 >> > -XX:ParallelGCThreads=8 -XX:ConcGCThreads=6 >> > -XX:InitiatingHeapOccupancyPercent=35 -XX:+PerfDisableSharedMem >> > -XX:-UseBiasedLocking -XX:G1HeapRegionSize=4m >> > -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=../logs >> > -Djava.io.tmpdir=REDACTED -XX:+ExitOnOutOfMemoryError >> > -XX:-OmitStackTraceInFastThrow >> > >> -javaagent:/usr/local/appoptics-6.5.1/appoptics-agent.jar=config=appoptics-agent.json >> > -XX:-PreferContainerQuotaForCPUCount -XX:NativeMemoryTracking=detail >> > -jar REDACTED >> > openjdk version "11.0.5" 2019-10-15 >> > OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.5+10) >> > OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.5+10, mixed mode) >> > >> > From zgu at redhat.com Thu Nov 14 18:49:41 2019 From: zgu at redhat.com (Zhengyu Gu) Date: Thu, 14 Nov 2019 13:49:41 -0500 Subject: Native Memory Tracking Bug In-Reply-To: References: <7324a11c-1a46-12d6-4a06-cba7afe2a344@redhat.com> Message-ID: Hi Jacob, NMT only tracks native memory used by Hotspot JVM itself. It does not track native memory, for example, memory allocations inside jni code, data structures and memory allocations by c libraries, etc. Hope it helps. -Zhengyu On 11/14/19 1:06 PM, Jacob Schlather wrote: > Upon further investigation it appears that NMT is not accurately > tracking the memory usage of the JVM. I ran NMT summary and got the > following output > > Total: reserved=9095231KB, committed=8746919KB > > And then I ran pmap on the host to check memory usage > > sudo pmap -x 89186 | tail -n 1 > total kB ? ? ? ?34497556 9397564 9352480 > > Is this sort of gap expected? 
> > > On Thu, Nov 14, 2019 at 12:18 PM Jacob Schlather > wrote: > > Hi Zhengyu, > > This bug seems to happen every time for?one of our web services > running in production, but it doesn't happen in qa, so I don't think > I'll be able to provide a reproducer. If there are some lightweight > debugging options we could turn on and provide that output to you, I > could do that. > > Having dug into the memory issue we're seeing, what it looks like is > that in Java 8 the committed memory as shown by NMT is significantly > less than the memory the JVM was actually using. Now on Java 11, the > JVM seems to be using much closer to the committed memory. Do you > happen to know of anything we could look into for that? Thanks. > > On Wed, Nov 13, 2019 at 10:14 PM Zhengyu Gu > wrote: > > Hi Jacob, > > It looks like JDK-8204128 [1] strikes again. It would be very > helpful if > you can provide a reproducer. > > Thanks, > > -Zhengyu > > > [1] https://bugs.openjdk.java.net/browse/JDK-8204128 > > > On 11/13/19 9:56 PM, Jacob Schlather wrote: > > We're currently in the process of upgrading our Java > applications from Java > > 8 to Java 11. After deploying some of our production > applications with Java > > 11, we began to see the resident memory size grow without > bound until our > > orchestrator killed the applications for excessive memory > usage. We've > > started to debug this issue, but noticed that the NMT output > appears to be > > incorrect. In particular the Compiler section is displaying > > > > Compiler (reserved=4528KB +1010KB, committed=4528KB +1010KB) > > (malloc=4896KB +1206KB #4132 +508) > > (arena=18014398509481617KB -196 #5) > > > > Obviously the arena value here is quite wrong and there's no > way the > > reserved memory can be less than the malloc memory. Further > there's > > a 276305KB gap in the RSS size reported by our metrics and > the amount of > > memory NMT reports as committed. 
Here's our JVM args and JDK > version, I've > > additionally attached the full output of the NMT detailed diff. > > > > Running java11 with JVM arguments: > -Djava.net.preferIPv4Stack=true > > -Xms6144m -Xmx6g -Xss256k -XX:MetaspaceSize=128m > > -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=128m > > -XX:+UseG1GC -XX:MaxGCPauseMillis=350 > -XX:+UnlockExperimentalVMOptions > > -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=45 > > -XX:ParallelGCThreads=8 -XX:ConcGCThreads=6 > > -XX:InitiatingHeapOccupancyPercent=35 -XX:+PerfDisableSharedMem > > -XX:-UseBiasedLocking -XX:G1HeapRegionSize=4m > > -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=../logs > > -Djava.io.tmpdir=REDACTED -XX:+ExitOnOutOfMemoryError > > -XX:-OmitStackTraceInFastThrow > > > -javaagent:/usr/local/appoptics-6.5.1/appoptics-agent.jar=config=appoptics-agent.json > > -XX:-PreferContainerQuotaForCPUCount > -XX:NativeMemoryTracking=detail > > -jar REDACTED > > openjdk version "11.0.5" 2019-10-15 > > OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.5+10) > > OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.5+10, mixed > mode) > > > From zgu at redhat.com Thu Nov 14 20:33:16 2019 From: zgu at redhat.com (Zhengyu Gu) Date: Thu, 14 Nov 2019 15:33:16 -0500 Subject: Native Memory Tracking Bug In-Reply-To: References: <7324a11c-1a46-12d6-4a06-cba7afe2a344@redhat.com> Message-ID: <86a20849-b461-6e17-875d-a4f3ed725438@redhat.com> On 11/14/19 1:06 PM, Jacob Schlather wrote: > Upon further investigation it appears that NMT is not accurately > tracking the memory usage of the JVM. I ran NMT summary and got the > following output > > Total: reserved=9095231KB, committed=8746919KB > > And then I ran pmap on the host to check memory usage > > sudo pmap -x 89186 | tail -n 1 > total kB        34497556 9397564 9352480 > > Is this sort of gap expected? Sorry, I was preoccupied by the tracking bug. As for the memory gap, it really depends on the individual application. 
For example, does it have JNI methods that allocate a lot of memory outside of the JVM? How many sockets does it open, given that each socket may have buffer(s) associated with it? The NMT output you posted on GitHub shows 700+ threads; each thread may have a per-thread malloc pool that is managed by the C library and not tracked by NMT, and that can add up to a significant amount of memory. -Zhengyu > > > On Thu, Nov 14, 2019 at 12:18 PM Jacob Schlather > wrote: > > Hi Zhengyu, > > This bug seems to happen every time for one of our web services > running in production, but it doesn't happen in qa, so I don't think > I'll be able to provide a reproducer. If there are some lightweight > debugging options we could turn on and provide that output to you, I > could do that. > > Having dug into the memory issue we're seeing, what it looks like is > that in Java 8 the committed memory as shown by NMT is significantly > less than the memory the JVM was actually using. Now on Java 11, the > JVM seems to be using much closer to the committed memory. Do you > happen to know of anything we could look into for that? Thanks. > > On Wed, Nov 13, 2019 at 10:14 PM Zhengyu Gu > wrote: > > Hi Jacob, > > It looks like JDK-8204128 [1] strikes again. It would be very > helpful if > you can provide a reproducer. > > Thanks, > > -Zhengyu > > > [1] https://bugs.openjdk.java.net/browse/JDK-8204128 > > > On 11/13/19 9:56 PM, Jacob Schlather wrote: > > We're currently in the process of upgrading our Java > applications from Java > > 8 to Java 11. After deploying some of our production > applications with Java > > 11, we began to see the resident memory size grow without > bound until our > > orchestrator killed the applications for excessive memory > usage. We've > > started to debug this issue, but noticed that the NMT output > appears to be 
In particular the Compiler section is displaying > > > > Compiler (reserved=4528KB +1010KB, committed=4528KB +1010KB) > > (malloc=4896KB +1206KB #4132 +508) > > (arena=18014398509481617KB -196 #5) > > > > Obviously the arena value here is quite wrong and there's no > way the > > reserved memory can be less than the malloc memory. Further > there's > > a 276305KB gap in the RSS size reported by our metrics and > the amount of > > memory NMT reports as committed. Here's our JVM args and JDK > version, I've > > additionally attached the full output of the NMT detailed diff. > > > > Running java11 with JVM arguments: > -Djava.net.preferIPv4Stack=true > > -Xms6144m -Xmx6g -Xss256k -XX:MetaspaceSize=128m > > -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=128m > > -XX:+UseG1GC -XX:MaxGCPauseMillis=350 > -XX:+UnlockExperimentalVMOptions > > -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=45 > > -XX:ParallelGCThreads=8 -XX:ConcGCThreads=6 > > -XX:InitiatingHeapOccupancyPercent=35 -XX:+PerfDisableSharedMem > > -XX:-UseBiasedLocking -XX:G1HeapRegionSize=4m > > -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=../logs > > -Djava.io.tmpdir=REDACTED -XX:+ExitOnOutOfMemoryError > > -XX:-OmitStackTraceInFastThrow > > > -javaagent:/usr/local/appoptics-6.5.1/appoptics-agent.jar=config=appoptics-agent.json > > -XX:-PreferContainerQuotaForCPUCount > -XX:NativeMemoryTracking=detail > > -jar REDACTED > > openjdk version "11.0.5" 2019-10-15 > > OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.5+10) > > OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.5+10, mixed > mode) > > > From jschlather at hubspot.com Thu Nov 14 20:40:57 2019 From: jschlather at hubspot.com (Jacob Schlather) Date: Thu, 14 Nov 2019 15:40:57 -0500 Subject: Native Memory Tracking Bug In-Reply-To: <86a20849-b461-6e17-875d-a4f3ed725438@redhat.com> References: <7324a11c-1a46-12d6-4a06-cba7afe2a344@redhat.com> <86a20849-b461-6e17-875d-a4f3ed725438@redhat.com> Message-ID: Hi Zhengyu, For another web service also 
seeing this issue, I ran NMT summary on java 8 vs java 11. As you can see our RSS increases by about 1gb when we run on Java 11. We don't make use of any JNI apis that I'm aware of, but we do have three shaded versions of netty and open many sockets. Has the memory behavior of any of the things you mentioned changed greatly in between jdk8 and jdk11? I do notice that g1gc has undergone substantial work and we've tuned that heavily. I'm going to run a test tomorrow morning removing our g1gc tuning and see how that impacts memory usage on java 11. Java 8 NMT Committed: 3560392KB Actual Memory Usage: 3177185KB Java 11 NMT Committed: 3436458KB Actual Memory Usage: 4320133 KB On Thu, Nov 14, 2019 at 3:33 PM Zhengyu Gu wrote: > > > On 11/14/19 1:06 PM, Jacob Schlather wrote: > > Upon further investigation it appears that NMT is not accurately > > tracking the memory usage of the JVM. I ran NMT summary and got the > > following output > > > > Total: reserved=9095231KB, committed=8746919KB > > > > And then I ran pmap on the host to check memory usage > > > > sudo pmap -x 89186 | tail -n 1 > > total kB 34497556 9397564 9352480 > > > > Is this sort of gap expected? > > Sorry, I was preoccupied by tracking bug. > > As the memory gap, it really depends on individual application. > > For example, if it has jni methods that allocate a lot memory outside of > JVM? How many sockets it opens, given each socket may have buffer(s) > associating with it, etc. > > The NMT output, you posted on github, shows 700+ threads, each thread > may have per-thread malloc pool, that is managed by c library and not > tracked by NMT, that can add up significant amount of memory. > > -Zhengyu > > > > > > > > > > > > > On Thu, Nov 14, 2019 at 12:18 PM Jacob Schlather > > wrote: > > > > Hi Zhengyu, > > > > This bug seems to happen every time for one of our web services > > running in production, but it doesn't happen in qa, so I don't think > > I'll be able to provide a reproducer. 
If there are some lightweight > > debugging options we could turn on and provide that output to you, I > > could do that. > > > > Having dug into the memory issue we're seeing, what it looks like is > > that in Java 8 the committed memory as shown by NMT is significantly > > less than the memory the JVM was actually using. Now on Java 11, the > > JVM seems to be using much closer to the committed memory. Do you > > happen to know of anything we could look into for that? Thanks. > > > > On Wed, Nov 13, 2019 at 10:14 PM Zhengyu Gu > > wrote: > > > > Hi Jacob, > > > > It looks like JDK-8204128 [1] strikes again. It would be very > > helpful if > > you can provide a reproducer. > > > > Thanks, > > > > -Zhengyu > > > > > > [1] https://bugs.openjdk.java.net/browse/JDK-8204128 > > > > > > > On 11/13/19 9:56 PM, Jacob Schlather wrote: > > > We're currently in the process of upgrading our Java > > applications from Java > > > 8 to Java 11. After deploying some of our production > > applications with Java > > > 11, we began to see the resident memory size grow without > > bound until our > > > orchestrator killed the applications for excessive memory > > usage. We've > > > started to debug this issue, but noticed that the NMT output > > appears to be > > > incorrect. In particular the Compiler section is displaying > > > > > > Compiler (reserved=4528KB +1010KB, committed=4528KB +1010KB) > > > (malloc=4896KB +1206KB #4132 +508) > > > (arena=18014398509481617KB -196 #5) > > > > > > Obviously the arena value here is quite wrong and there's no > > way the > > > reserved memory can be less than the malloc memory. Further > > there's > > > a 276305KB gap in the RSS size reported by our metrics and > > the amount of > > > memory NMT reports as committed. Here's our JVM args and JDK > > version, I've > > > additionally attached the full output of the NMT detailed diff. 
> > > > > > Running java11 with JVM arguments: > > -Djava.net.preferIPv4Stack=true > > > -Xms6144m -Xmx6g -Xss256k -XX:MetaspaceSize=128m > > > -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=128m > > > -XX:+UseG1GC -XX:MaxGCPauseMillis=350 > > -XX:+UnlockExperimentalVMOptions > > > -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=45 > > > -XX:ParallelGCThreads=8 -XX:ConcGCThreads=6 > > > -XX:InitiatingHeapOccupancyPercent=35 -XX:+PerfDisableSharedMem > > > -XX:-UseBiasedLocking -XX:G1HeapRegionSize=4m > > > -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=../logs > > > -Djava.io.tmpdir=REDACTED -XX:+ExitOnOutOfMemoryError > > > -XX:-OmitStackTraceInFastThrow > > > > > > -javaagent:/usr/local/appoptics-6.5.1/appoptics-agent.jar=config=appoptics-agent.json > > > -XX:-PreferContainerQuotaForCPUCount > > -XX:NativeMemoryTracking=detail > > > -jar REDACTED > > > openjdk version "11.0.5" 2019-10-15 > > > OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.5+10) > > > OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.5+10, mixed > > mode) > > > > > > From zgu at redhat.com Thu Nov 14 21:38:28 2019 From: zgu at redhat.com (Zhengyu Gu) Date: Thu, 14 Nov 2019 16:38:28 -0500 Subject: Native Memory Tracking Bug In-Reply-To: References: <7324a11c-1a46-12d6-4a06-cba7afe2a344@redhat.com> <86a20849-b461-6e17-875d-a4f3ed725438@redhat.com> Message-ID: <115e85d9-59f0-2280-9e32-9900f361f81a@redhat.com> On 11/14/19 3:40 PM, Jacob Schlather wrote: > Hi Zhengyu, > > For another web service also seeing this issue, So, this is not an intermittent issue. I mean if you see a bad compiler arena size, subsequent queries also return bad compiler arena sizes, right? Thanks, -Zhengyu I ran NMT summary on > java 8 vs java 11. As you can see our RSS increases by about 1gb when we > run on Java 11. We don't make use of any JNI apis that I'm aware of, but > we do have three shaded versions of netty and open many sockets. 
Has the > memory behavior of any of the things you mentioned changed greatly in > between jdk8 and jdk11? I do notice that g1gc has undergone substantial > work and we've tuned that heavily. I'm going to run a test tomorrow > morning removing our g1gc tuning and see how that impacts memory usage > on java 11. > > Java 8 > NMT Committed:?3560392KB > Actual Memory Usage:?3177185KB > > Java 11 > NMT Committed:?3436458KB > Actual Memory Usage: 4320133 KB > > On Thu, Nov 14, 2019 at 3:33 PM Zhengyu Gu > wrote: > > > > On 11/14/19 1:06 PM, Jacob Schlather wrote: > > Upon further investigation it appears that NMT is not accurately > > tracking the memory usage of the JVM. I ran NMT summary and got the > > following output > > > > Total: reserved=9095231KB, committed=8746919KB > > > > And then I ran pmap on the host to check memory usage > > > > sudo pmap -x 89186 | tail -n 1 > > total kB ? ? ? ?34497556 9397564 9352480 > > > > Is this sort of gap expected? > > Sorry, I was preoccupied by tracking bug. > > As the memory gap, it really depends on individual application. > > For example, if it has jni methods that allocate a lot memory > outside of > JVM? How many sockets it opens, given each socket may have buffer(s) > associating with it, etc. > > The NMT output, you posted on github, shows 700+ threads, each thread > may have per-thread malloc pool, that is managed by c library and not > tracked by NMT, that can add up significant amount of memory. > > -Zhengyu > > > > > > > > > > > > > On Thu, Nov 14, 2019 at 12:18 PM Jacob Schlather > > > >> > wrote: > > > > Hi Zhengyu, > > > > This bug seems to happen every time for?one of our web services > > running in production, but it doesn't happen in qa, so I don't think > > I'll be able to provide a reproducer. If there are some lightweight > > debugging options we could turn on and provide that output to you, I > > could do that. 
> > > > Having dug into the memory issue we're seeing, what it looks like is > > that in Java 8 the committed memory as shown by NMT is significantly > > less than the memory the JVM was actually using. Now on Java 11, the > > JVM seems to be using much closer to the committed memory. Do you > > happen to know of anything we could look into for that? Thanks. > > > > On Wed, Nov 13, 2019 at 10:14 PM Zhengyu Gu > > >> wrote: > > > > Hi Jacob, > > > > It looks like JDK-8204128 [1] strikes again. It would be very > > helpful if > > you can provide a reproducer. > > > > Thanks, > > > > -Zhengyu > > > > > > [1] https://bugs.openjdk.java.net/browse/JDK-8204128 > > > > > > On 11/13/19 9:56 PM, Jacob Schlather wrote: > > > We're currently in the process of upgrading our Java > > applications from Java > > > 8 to Java 11. After deploying some of our production > > applications with Java > > > 11, we began to see the resident memory size grow without > > bound until our > > > orchestrator killed the applications for excessive memory > > usage. We've > > > started to debug this issue, but noticed that the NMT output > > appears to be > > > incorrect. In particular the Compiler section is displaying > > > > > > Compiler (reserved=4528KB +1010KB, committed=4528KB +1010KB) > > > (malloc=4896KB +1206KB #4132 +508) > > > (arena=18014398509481617KB -196 #5) > > > > > > Obviously the arena value here is quite wrong and there's no > > way the > > > reserved memory can be less than the malloc memory. Further > > there's > > > a 276305KB gap in the RSS size reported by our metrics and > > the amount of > > > memory NMT reports as committed. Here's our JVM args and JDK > > version, I've > > > additionally attached the full output of the NMT detailed diff. 
> > > > > > Running java11 with JVM arguments: > > -Djava.net.preferIPv4Stack=true > > > -Xms6144m -Xmx6g -Xss256k -XX:MetaspaceSize=128m > > > -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=128m > > > -XX:+UseG1GC -XX:MaxGCPauseMillis=350 > > -XX:+UnlockExperimentalVMOptions > > > -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=45 > > > -XX:ParallelGCThreads=8 -XX:ConcGCThreads=6 > > > -XX:InitiatingHeapOccupancyPercent=35 -XX:+PerfDisableSharedMem > > > -XX:-UseBiasedLocking -XX:G1HeapRegionSize=4m > > > -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=../logs > > > -Djava.io.tmpdir=REDACTED -XX:+ExitOnOutOfMemoryError > > > -XX:-OmitStackTraceInFastThrow > > > > > > -javaagent:/usr/local/appoptics-6.5.1/appoptics-agent.jar=config=appoptics-agent.json > > > -XX:-PreferContainerQuotaForCPUCount > > -XX:NativeMemoryTracking=detail > > > -jar REDACTED > > > openjdk version "11.0.5" 2019-10-15 > > > OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.5+10) > > > OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.5+10, mixed > > mode) > > > > > > From zgu at redhat.com Thu Nov 14 21:54:08 2019 From: zgu at redhat.com (Zhengyu Gu) Date: Thu, 14 Nov 2019 16:54:08 -0500 Subject: Native Memory Tracking Bug In-Reply-To: References: <7324a11c-1a46-12d6-4a06-cba7afe2a344@redhat.com> <86a20849-b461-6e17-875d-a4f3ed725438@redhat.com> Message-ID: <3c59a617-322e-6a0b-b08c-f7cbea076a9d@redhat.com> Hi Jacob, On 11/14/19 3:40 PM, Jacob Schlather wrote: > Hi Zhengyu, > > For another web service also seeing this issue, I ran NMT summary on > java 8 vs java 11. As you can see our RSS increases by about 1gb when we > run on Java 11. We don't make use of any JNI apis that I'm aware of, but > we do have three shaded versions of netty and open many sockets. Has the > memory behavior of any of the things you mentioned changed greatly in > between jdk8 and jdk11? I do notice that g1gc has undergone substantial > work and we've tuned that heavily. 
I'm going to run a test tomorrow > morning removing our g1gc tuning and see how that impacts memory usage > on java 11. > > Java 8 > NMT Committed: 3560392KB > Actual Memory Usage: 3177185KB > > Java 11 > NMT Committed: 3436458KB > Actual Memory Usage: 4320133 KB In the case of G1GC changes, the memory usage would be reflected in the NMT output, which is not the case here. It is likely something outside of the JVM. -Zhengyu > > On Thu, Nov 14, 2019 at 3:33 PM Zhengyu Gu > wrote: > > > > On 11/14/19 1:06 PM, Jacob Schlather wrote: > > Upon further investigation it appears that NMT is not accurately > > tracking the memory usage of the JVM. I ran NMT summary and got the > > following output > > > > Total: reserved=9095231KB, committed=8746919KB > > > > And then I ran pmap on the host to check memory usage > > > > sudo pmap -x 89186 | tail -n 1 > > total kB        34497556 9397564 9352480 > > > > Is this sort of gap expected? > > Sorry, I was preoccupied by tracking bug. > > As the memory gap, it really depends on individual application. > > For example, if it has jni methods that allocate a lot memory > outside of > JVM? How many sockets it opens, given each socket may have buffer(s) > associating with it, etc. > > The NMT output, you posted on github, shows 700+ threads, each thread > may have per-thread malloc pool, that is managed by c library and not > tracked by NMT, that can add up significant amount of memory. > > -Zhengyu > > > > > > > > > > > > > On Thu, Nov 14, 2019 at 12:18 PM Jacob Schlather > > > >> > wrote: > > > > Hi Zhengyu, > > > > This bug seems to happen every time for one of our web services > > running in production, but it doesn't happen in qa, so I don't think > > I'll be able to provide a reproducer. If there are some lightweight > > debugging options we could turn on and provide that output to you, I > > could do that. 
> > > > Having dug into the memory issue we're seeing, what it looks like is > > that in Java 8 the committed memory as shown by NMT is significantly > > less than the memory the JVM was actually using. Now on Java 11, the > > JVM seems to be using much closer to the committed memory. Do you > > happen to know of anything we could look into for that? Thanks. > > > > On Wed, Nov 13, 2019 at 10:14 PM Zhengyu Gu > > >> wrote: > > > > Hi Jacob, > > > > It looks like JDK-8204128 [1] strikes again. It would be very > > helpful if > > you can provide a reproducer. > > > > Thanks, > > > > -Zhengyu > > > > > > [1] https://bugs.openjdk.java.net/browse/JDK-8204128 > > > > > > On 11/13/19 9:56 PM, Jacob Schlather wrote: > > > We're currently in the process of upgrading our Java > > applications from Java > > > 8 to Java 11. After deploying some of our production > > applications with Java > > > 11, we began to see the resident memory size grow without > > bound until our > > > orchestrator killed the applications for excessive memory > > usage. We've > > > started to debug this issue, but noticed that the NMT output > > appears to be > > > incorrect. In particular the Compiler section is displaying > > > > > > Compiler (reserved=4528KB +1010KB, committed=4528KB +1010KB) > > > (malloc=4896KB +1206KB #4132 +508) > > > (arena=18014398509481617KB -196 #5) > > > > > > Obviously the arena value here is quite wrong and there's no > > way the > > > reserved memory can be less than the malloc memory. Further > > there's > > > a 276305KB gap in the RSS size reported by our metrics and > > the amount of > > > memory NMT reports as committed. Here's our JVM args and JDK > > version, I've > > > additionally attached the full output of the NMT detailed diff. 
> > > > > > Running java11 with JVM arguments: > > -Djava.net.preferIPv4Stack=true > > > -Xms6144m -Xmx6g -Xss256k -XX:MetaspaceSize=128m > > > -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=128m > > > -XX:+UseG1GC -XX:MaxGCPauseMillis=350 > > -XX:+UnlockExperimentalVMOptions > > > -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=45 > > > -XX:ParallelGCThreads=8 -XX:ConcGCThreads=6 > > > -XX:InitiatingHeapOccupancyPercent=35 -XX:+PerfDisableSharedMem > > > -XX:-UseBiasedLocking -XX:G1HeapRegionSize=4m > > > -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=../logs > > > -Djava.io.tmpdir=REDACTED -XX:+ExitOnOutOfMemoryError > > > -XX:-OmitStackTraceInFastThrow > > > > > > -javaagent:/usr/local/appoptics-6.5.1/appoptics-agent.jar=config=appoptics-agent.json > > > -XX:-PreferContainerQuotaForCPUCount > > -XX:NativeMemoryTracking=detail > > > -jar REDACTED > > > openjdk version "11.0.5" 2019-10-15 > > > OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.5+10) > > > OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.5+10, mixed > > mode) > > > > > > From david.holmes at oracle.com Thu Nov 14 22:04:06 2019 From: david.holmes at oracle.com (David Holmes) Date: Fri, 15 Nov 2019 08:04:06 +1000 Subject: RFR (M) 8223261 "JDK-8189208 followup - remove JDK_GetVersionInfo0 and the supporting code" In-Reply-To: <306d4073-7d64-d807-b670-52126462e81c@oracle.com> References: <4623455a-2d90-e697-89b3-830d5becf8f9@oracle.com> <306d4073-7d64-d807-b670-52126462e81c@oracle.com> Message-ID: <1fc99211-7b79-cc8c-b54f-b7de27294a95@oracle.com> On 15/11/2019 1:13 am, gerard ziemski wrote: > Thank you for the review. > > I'm definitively going to wait for Christoph to check in his fix first. > > I tried in fact to leave jdk_util.c/.h files in empty without removing > them from the repo, and even though Mac/Linux were OK with that, > Solaris/Windows were not. The .c file can go. The .h file wouldn't be empty as it still has the other includes. 
David > > cheers > > On 11/13/19 9:21 PM, David Holmes wrote: >> Hi Gerard, >> >> On 14/11/2019 6:05 am, Langer, Christoph wrote: >>> Hi Gerard, >>> >>> generally it looks like a nice cleanup. >>> >>> I've got a patch prepared though, which I was planning on posting >>> tomorrow. It is about cleanup for the canonicalize function in >>> libjava. I wanted to use jdk_util.h for the function prototype. I had >>> not yet filed a bug but here is what I have: >>> http://cr.openjdk.java.net/~clanger/webrevs/cleanup-canonicalize/ >>> >>> So maybe you could refrain from removing jdk_util.h or maybe you can >>> hold off submitting your change until my cleanup is reviewed? >> >> I'd also suggest not deleting jdk_util.h. It seems very odd to me to >> have jdk_util_md.h with no shared jdk_util.h. If you keep jdk_util.h >> then you don't need to touch a number of the files. >> >> Otherwise this looks like a good cleanup. >> >> Thanks, >> David >> ----- >> >>> I'll create a bug and post an official review thread tomorrow... >>> >>> Thanks >>> Christoph >>> >>>> -----Original Message----- >>>> From: hotspot-dev On Behalf Of >>>> Mandy Chung >>>> Sent: Mittwoch, 13. November 2019 20:30 >>>> To: gerard ziemski >>>> Cc: awt-dev at openjdk.java.net; hotspot-dev developers >>> dev at openjdk.java.net>; core-libs-dev at openjdk.java.net >>>> Subject: Re: RFR (M) 8223261 "JDK-8189208 followup - remove >>>> JDK_GetVersionInfo0 and the supporting code" >>>> >>>> >>>> >>>> On 11/13/19 10:50 AM, gerard ziemski wrote: >>>>> Hi all, >>>>> >>>>> Please review this cleanup, where we remove JDK_GetVersionInfo0 and >>>>> related code, since we can get build versions directly from within the >>>>> VM itself: >>>>> >>>>> I'm including core-libs and awt in this review because the proposed >>>>> fix touches their corresponding files. 
>>>>> >>>>> >>>>> bug: https://bugs.openjdk.java.net/browse/JDK-8223261 >>>>> webrev: http://cr.openjdk.java.net/~gziemski/8223261_rev1 >>>>> tests: passes Mach5 tier1,2,3,4,5,6 >>>>> >>>> >>>> This is a good clean up.? JDK_GetVersionInfo0 was needed long time ago >>>> in particular in HS express support that is no longer applicable. >>>> >>>> One leftover comment should also be removed. >>>> >>>> src/hotspot/share/runtime/vm_version.hpp >>>> ? ? // Gets the jvm_version_info.jvm_version defined in jvm.h >>>> >>>> otherwise looks good. >>>> >>>> Mandy > From kim.barrett at oracle.com Thu Nov 14 22:52:00 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 14 Nov 2019 17:52:00 -0500 Subject: RFR (S): 8233702: Introduce helper function to clamp value to range In-Reply-To: <290c83af-f554-9236-4f85-740736374e8d@oracle.com> References: <5e24eab5-7a81-3d14-31e4-7d81a10debff@oracle.com> <4F8382A7-2BAE-42D0-9F01-7E57CA588FF2@oracle.com> <8ced83cd-3374-4517-21b7-8f6401c1c81e@oracle.com> <290c83af-f554-9236-4f85-740736374e8d@oracle.com> Message-ID: <832C5850-7B54-43CB-A688-BEB1EAFDA437@oracle.com> > On Nov 14, 2019, at 10:58 AM, Thomas Schatzl wrote: > New webrevs: > > http://cr.openjdk.java.net/~tschatzl/8233702/webrev.1_to_2/ (diff) > http://cr.openjdk.java.net/~tschatzl/8233702/webrev.2/ (full) > > Passed hs-tier1-5. > > Thanks, > Thomas Looks good. From jschlather at hubspot.com Thu Nov 14 23:22:33 2019 From: jschlather at hubspot.com (Jacob Schlather) Date: Thu, 14 Nov 2019 18:22:33 -0500 Subject: Native Memory Tracking Bug In-Reply-To: <3c59a617-322e-6a0b-b08c-f7cbea076a9d@redhat.com> References: <7324a11c-1a46-12d6-4a06-cba7afe2a344@redhat.com> <86a20849-b461-6e17-875d-a4f3ed725438@redhat.com> <3c59a617-322e-6a0b-b08c-f7cbea076a9d@redhat.com> Message-ID: Okay, I'll continue digging into our memory profile. Regarding the NMT bug, yes I have an application in production which so far on every instance, everytime I take an NMT summary it produces the bug. 
But it doesn't happen in QA, so I can't reproduce in a test. Here's another example I just took. Compiler (reserved=6022KB, committed=6022KB) (malloc=6628KB #5024) (arena=18014398509481378KB #5) On Thu, Nov 14, 2019 at 4:54 PM Zhengyu Gu wrote: > Hi Jacob, > > On 11/14/19 3:40 PM, Jacob Schlather wrote: > > Hi Zhengyu, > > > > For another web service also seeing this issue, I ran NMT summary on > > java 8 vs java 11. As you can see our RSS increases by about 1gb when we > > run on Java 11. We don't make use of any JNI apis that I'm aware of, but > > we do have three shaded versions of netty and open many sockets. Has the > > memory behavior of any of the things you mentioned changed greatly in > > between jdk8 and jdk11? I do notice that g1gc has undergone substantial > > work and we've tuned that heavily. I'm going to run a test tomorrow > > morning removing our g1gc tuning and see how that impacts memory usage > > on java 11. > > > > Java 8 > > NMT Committed: 3560392KB > > Actual Memory Usage: 3177185KB > > > > Java 11 > > NMT Committed: 3436458KB > > Actual Memory Usage: 4320133 KB > > In case of g1gc changes, the memory usage should reflect in NMT outputs, > which is not the case. Likely to be something outside of JVM. > > -Zhengyu > > > > > > On Thu, Nov 14, 2019 at 3:33 PM Zhengyu Gu > > wrote: > > > > > > > > On 11/14/19 1:06 PM, Jacob Schlather wrote: > > > Upon further investigation it appears that NMT is not accurately > > > tracking the memory usage of the JVM. I ran NMT summary and got the > > > following output > > > > > > Total: reserved=9095231KB, committed=8746919KB > > > > > > And then I ran pmap on the host to check memory usage > > > > > > sudo pmap -x 89186 | tail -n 1 > > > total kB 34497556 9397564 9352480 > > > > > > Is this sort of gap expected? > > > > Sorry, I was preoccupied by tracking bug. > > > > As the memory gap, it really depends on individual application. 
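[Editor's note: the impossible Compiler arena values reported in this thread (e.g. arena=18014398509481617KB, arena=18014398509481378KB) are consistent with a small negative arena balance being printed through an unsigned 64-bit byte counter, which is the kind of unbalanced arena accounting Zhengyu's JDK-8204128 reference points at. A quick back-of-the-envelope decoder, assuming the counter wraps at 2^64 bytes (2^54 KB); this is a sketch, not actual HotSpot code:]

```java
// Decodes an overflowed NMT arena value back to its signed size in KB.
// Assumption: the arena counter is a 64-bit unsigned byte count printed
// in KB, so a small negative balance wraps to just under 2^64/1024 KB.
public class NmtArenaUnderflow {
    static final long WRAP_KB = 1L << 54; // 2^64 bytes, expressed in KB

    static long signedArenaKB(long reportedKB) {
        return reportedKB - WRAP_KB;
    }

    public static void main(String[] args) {
        // Values reported in this thread decode to small negative balances:
        System.out.println(signedArenaKB(18014398509481617L)); // -367 KB
        System.out.println(signedArenaKB(18014398509481378L)); // -606 KB
    }
}
```

That both samples decode to a few hundred negative KB supports the unbalanced-accounting explanation rather than actual multi-petabyte usage.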
> > > > For example, if it has jni methods that allocate a lot memory > > outside of > > JVM? How many sockets it opens, given each socket may have buffer(s) > > associating with it, etc. > > > > The NMT output, you posted on github, shows 700+ threads, each thread > > may have per-thread malloc pool, that is managed by c library and not > > tracked by NMT, that can add up significant amount of memory. > > > > -Zhengyu > > > > > > > > > > > > > > > > > > > > > > > On Thu, Nov 14, 2019 at 12:18 PM Jacob Schlather > > > > > >> > > wrote: > > > > > > Hi Zhengyu, > > > > > > This bug seems to happen every time for one of our web services > > > running in production, but it doesn't happen in qa, so I don't think > > > I'll be able to provide a reproducer. If there are some lightweight > > > debugging options we could turn on and provide that output to you, I > > > could do that. > > > > > > Having dug into the memory issue we're seeing, what it looks like is > > > that in Java 8 the committed memory as shown by NMT is significantly > > > less than the memory the JVM was actually using. Now on Java 11, the > > > JVM seems to be using much closer to the committed memory. Do you > > > happen to know of anything we could look into for that? Thanks. > > > > > > On Wed, Nov 13, 2019 at 10:14 PM Zhengyu Gu > > > > >> wrote: > > > > > > Hi Jacob, > > > > > > It looks like JDK-8204128 [1] strikes again. It would be very > > > helpful if > > > you can provide a reproducer. > > > > > > Thanks, > > > > > > -Zhengyu > > > > > > > > > [1] https://bugs.openjdk.java.net/browse/JDK-8204128 > > > > > > > > > > On 11/13/19 9:56 PM, Jacob Schlather wrote: > > > > We're currently in the process of upgrading our Java > > > applications from Java > > > > 8 to Java 11. 
After deploying some of our production > > > applications with Java > > > > 11, we began to see the resident memory size grow without > > > bound until our > > > > orchestrator killed the applications for excessive memory > > > usage. We've > > > > started to debug this issue, but noticed that the NMT output > > > appears to be > > > > incorrect. In particular the Compiler section is displaying > > > > > > > > Compiler (reserved=4528KB +1010KB, committed=4528KB +1010KB) > > > > (malloc=4896KB +1206KB #4132 +508) > > > > (arena=18014398509481617KB -196 #5) > > > > > > > > Obviously the arena value here is quite wrong and there's no > > > way the > > > > reserved memory can be less than the malloc memory. Further > > > there's > > > > a 276305KB gap in the RSS size reported by our metrics and > > > the amount of > > > > memory NMT reports as committed. Here's our JVM args and JDK > > > version, I've > > > > additionally attached the full output of the NMT detailed diff. > > > > > > > > Running java11 with JVM arguments: > > > -Djava.net.preferIPv4Stack=true > > > > -Xms6144m -Xmx6g -Xss256k -XX:MetaspaceSize=128m > > > > -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=128m > > > > -XX:+UseG1GC -XX:MaxGCPauseMillis=350 > > > -XX:+UnlockExperimentalVMOptions > > > > -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=45 > > > > -XX:ParallelGCThreads=8 -XX:ConcGCThreads=6 > > > > -XX:InitiatingHeapOccupancyPercent=35 -XX:+PerfDisableSharedMem > > > > -XX:-UseBiasedLocking -XX:G1HeapRegionSize=4m > > > > -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=../logs > > > > -Djava.io.tmpdir=REDACTED -XX:+ExitOnOutOfMemoryError > > > > -XX:-OmitStackTraceInFastThrow > > > > > > > > > > -javaagent:/usr/local/appoptics-6.5.1/appoptics-agent.jar=config=appoptics-agent.json > > > > -XX:-PreferContainerQuotaForCPUCount > > > -XX:NativeMemoryTracking=detail > > > > -jar REDACTED > > > > openjdk version "11.0.5" 2019-10-15 > > > > OpenJDK Runtime Environment AdoptOpenJDK (build 
11.0.5+10) > > > > OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.5+10, mixed > > > mode) > > > > > > > > > > From brent.christian at oracle.com Thu Nov 14 23:58:11 2019 From: brent.christian at oracle.com (Brent Christian) Date: Thu, 14 Nov 2019 15:58:11 -0800 Subject: RFR 8233272 : The Class.forName specification should be updated to match the long-standing implementation with respect to class linking In-Reply-To: <7ddee581-c68b-bf40-22db-871205b8ca1b@oracle.com> References: <79528f01-3418-eb51-4dbd-897e4e97f853@oracle.com> <7ddee581-c68b-bf40-22db-871205b8ca1b@oracle.com> Message-ID: <3d305a6d-fd45-f6e5-93f2-c47250f31985@oracle.com> On 11/14/19 8:22 AM, Mandy Chung wrote: > On 11/13/19 10:37 AM, Brent Christian wrote: > > The spec change looks fine. OK, thanks. > As for the test, I expect that it simply calls Class.forName("Provider", > false, ucl) and then should succeed. > > Then calling Class.forName("Provider", true, ucl) should fail with an > error (I think it's EIIE with NCDFE?). This way it verifies that > initialization/linking does cause NCDFE during verification while > Class.forName does not do linking if initialize=false.
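For anyone following the thread, the effect of the initialize flag under discussion can be sketched with a small self-contained example. The class names here are illustrative stand-ins, not the test's actual Provider/Container classes, and this shows only the initialization half: the verification failure the real test relies on needs a deliberately broken class file and cannot be reproduced in a standalone snippet.

```java
// Minimal sketch of Class.forName's initialize flag: with initialize=false
// the class is loaded (and possibly linked) but its static initializer is
// not run; with initialize=true, initialization is forced.
public class ForNameDemo {
    static boolean innerInitialized = false;

    static class Inner {
        static { ForNameDemo.innerInitialized = true; }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader cl = ForNameDemo.class.getClassLoader();

        // initialize=false: Inner is loaded, but <clinit> must not run
        Class.forName("ForNameDemo$Inner", false, cl);
        if (innerInitialized) {
            throw new AssertionError("initialize=false must not run <clinit>");
        }

        // initialize=true: <clinit> runs as part of initialization
        Class.forName("ForNameDemo$Inner", true, cl);
        if (!innerInitialized) {
            throw new AssertionError("initialize=true must run <clinit>");
        }
        System.out.println("initialize flag behaved as expected");
    }
}
```

In the real test, the true/false variants are driven by separate @run commands as suggested above, with the broken class supplied via a custom class loader.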
Yes, that works well, thanks for the idea (plus I can do it with one fewer class): http://cr.openjdk.java.net/~bchristi/8233272/webrev-03/ -Brent From david.holmes at oracle.com Fri Nov 15 00:12:31 2019 From: david.holmes at oracle.com (David Holmes) Date: Fri, 15 Nov 2019 10:12:31 +1000 Subject: RFR 8233272 : The Class.forName specification should be updated to match the long-standing implementation with respect to class linking In-Reply-To: <3d305a6d-fd45-f6e5-93f2-c47250f31985@oracle.com> References: <79528f01-3418-eb51-4dbd-897e4e97f853@oracle.com> <7ddee581-c68b-bf40-22db-871205b8ca1b@oracle.com> <3d305a6d-fd45-f6e5-93f2-c47250f31985@oracle.com> Message-ID: Hi Brent, On 15/11/2019 9:58 am, Brent Christian wrote: > On 11/14/19 8:22 AM, Mandy Chung wrote: >> On 11/13/19 10:37 AM, Brent Christian wrote: >> >> The spec change looks fine. > > OK, thanks. +1 from me on spec changes. > >> As for the test, I expect that it simply calls >> Class.forName("Provider", false, ucl) and then should succeed. >> >> Then calling Class.forName("Provider", true, ucl) should fail with an >> error (I think it's EIIE with NCDFE?).? This way it verifies that >> initialization/linking does cause NCDFE during verification while >> Class.forName does not do linking if initialize=false. > > Yes, that works well, thanks for the idea (plus I can do it with one > fewer class): > > http://cr.openjdk.java.net/~bchristi/8233272/webrev-03/ Test is fine. Just one note/clarification: 63 // Loading (but not linking) Container will succeed. Container was already loaded as part of the failing forName call, so this second forName will just return it. 
Thanks, David ----- > -Brent From brent.christian at oracle.com Fri Nov 15 00:33:51 2019 From: brent.christian at oracle.com (Brent Christian) Date: Thu, 14 Nov 2019 16:33:51 -0800 Subject: RFR 8233272 : The Class.forName specification should be updated to match the long-standing implementation with respect to class linking In-Reply-To: References: <79528f01-3418-eb51-4dbd-897e4e97f853@oracle.com> <7ddee581-c68b-bf40-22db-871205b8ca1b@oracle.com> <3d305a6d-fd45-f6e5-93f2-c47250f31985@oracle.com> Message-ID: <99dded93-17d2-ea40-deef-56efd10218c7@oracle.com> On 11/14/19 4:12 PM, David Holmes wrote: > On 15/11/2019 9:58 am, Brent Christian wrote: >> >> http://cr.openjdk.java.net/~bchristi/8233272/webrev-03/ > > Test is fine. Just one note/clarification: > > 63 // Loading (but not linking) Container will succeed. > > Container was already loaded as part of the failing forName call, so > this second forName will just return it. Hmm. I could use a different classloader instance for the second Class.forName() call. (The test does fail as expected using a build with 8212117 but without 8233091.) -Brent From david.holmes at oracle.com Fri Nov 15 00:42:04 2019 From: david.holmes at oracle.com (David Holmes) Date: Fri, 15 Nov 2019 10:42:04 +1000 Subject: RFR 8233272 : The Class.forName specification should be updated to match the long-standing implementation with respect to class linking In-Reply-To: <99dded93-17d2-ea40-deef-56efd10218c7@oracle.com> References: <79528f01-3418-eb51-4dbd-897e4e97f853@oracle.com> <7ddee581-c68b-bf40-22db-871205b8ca1b@oracle.com> <3d305a6d-fd45-f6e5-93f2-c47250f31985@oracle.com> <99dded93-17d2-ea40-deef-56efd10218c7@oracle.com> Message-ID: On 15/11/2019 10:33 am, Brent Christian wrote: > On 11/14/19 4:12 PM, David Holmes wrote: >> On 15/11/2019 9:58 am, Brent Christian wrote: >>> >>> http://cr.openjdk.java.net/~bchristi/8233272/webrev-03/ >> >> Test is fine. Just one note/clarification: >> >> 63 
// Loading (but not linking) Container will succeed. >> >> Container was already loaded as part of the failing forName call, so >> this second forName will just return it. > > Hmm. I could use a different classloader instance for the second > Class.forName() call. If you really want to test both positive and negative cases from a clean slate then I would suggest modifying the test slightly and using two @run commands - one to try to initialize and one to not. Cheers, David > (The test does fail as expected using a build with 8212117 but without > 8233091.) > > -Brent From mandy.chung at oracle.com Fri Nov 15 00:46:18 2019 From: mandy.chung at oracle.com (Mandy Chung) Date: Thu, 14 Nov 2019 16:46:18 -0800 Subject: RFR 8233272 : The Class.forName specification should be updated to match the long-standing implementation with respect to class linking In-Reply-To: References: <79528f01-3418-eb51-4dbd-897e4e97f853@oracle.com> <7ddee581-c68b-bf40-22db-871205b8ca1b@oracle.com> <3d305a6d-fd45-f6e5-93f2-c47250f31985@oracle.com> <99dded93-17d2-ea40-deef-56efd10218c7@oracle.com> Message-ID: <1e4cd61f-dddd-b221-f0b4-448e64a4b440@oracle.com> On 11/14/19 4:42 PM, David Holmes wrote: > On 15/11/2019 10:33 am, Brent Christian wrote: >> On 11/14/19 4:12 PM, David Holmes wrote: >>> On 15/11/2019 9:58 am, Brent Christian wrote: >>>> >>>> http://cr.openjdk.java.net/~bchristi/8233272/webrev-03/ >>> >>> Test is fine. Just one note/clarification: >>> >>> 63 // Loading (but not linking) Container will succeed. >>> >>> Container was already loaded as part of the failing forName call, so >>> this second forName will just return it. >> >> Hmm. I could use a different classloader instance for the second >> Class.forName() call. > > If you really want to test both positive and negative cases from a > clean slate then I would suggest modifying the test slightly and using > two @run commands - one to try to initialize and one to not. Yes this is what I was thinking.
Two separate @run commands with an argument to indicate if initialize is true or false would do it. Mandy From andrei.pangin at gmail.com Fri Nov 15 01:12:42 2019 From: andrei.pangin at gmail.com (Andrei Pangin) Date: Fri, 15 Nov 2019 04:12:42 +0300 Subject: Native Memory Tracking Bug In-Reply-To: References: <7324a11c-1a46-12d6-4a06-cba7afe2a344@redhat.com> Message-ID: Hi Jacob, It is quite a common situation when RSS of a Java process is larger than NMT committed memory. NMT does not count many things, including memory allocated by native libraries, memory mapped files, malloc fragmentation etc. I summarized some typical native memory issues along with the tools and techniques to investigate them in the following answers on Stack Overflow: https://stackoverflow.com/a/53624438/3448419 https://stackoverflow.com/a/53598622/3448419 and in the Devoxx presentation "Memory footprint of a Java process": https://www.youtube.com/watch?v=c755fFv1Rnk Regards, Andrei On Fri, 14 Nov 2019 at 21:07, Jacob Schlather wrote: > Upon further investigation it appears that NMT is not accurately tracking > the memory usage of the JVM. I ran NMT summary and got the following output > > Total: reserved=9095231KB, committed=8746919KB > > And then I ran pmap on the host to check memory usage > > sudo pmap -x 89186 | tail -n 1 > total kB 34497556 9397564 9352480 > > Is this sort of gap expected? > > > On Thu, Nov 14, 2019 at 12:18 PM Jacob Schlather > wrote: > > > Hi Zhengyu, > > > > This bug seems to happen every time for one of our web services running > in > > production, but it doesn't happen in qa, so I don't think I'll be able to > > provide a reproducer. If there are some lightweight debugging options we > > could turn on and provide that output to you, I could do that. > > > > Having dug into the memory issue we're seeing, what it looks like is that > > in Java 8 the committed memory as shown by NMT is significantly less than > > the memory the JVM was actually using.
Now on Java 11, the JVM seems to > be > > using much closer to the committed memory. Do you happen to know of > > anything we could look into for that? Thanks. > > > > On Wed, Nov 13, 2019 at 10:14 PM Zhengyu Gu wrote: > > > >> Hi Jacob, > >> > >> It looks like JDK-8204128 [1] strikes again. It would be very helpful if > >> you can provide a reproducer. > >> > >> Thanks, > >> > >> -Zhengyu > >> > >> > >> [1] https://bugs.openjdk.java.net/browse/JDK-8204128 > >> > >> > >> > >> On 11/13/19 9:56 PM, Jacob Schlather wrote: > >> > We're currently in the process of upgrading our Java applications from > >> Java > >> > 8 to Java 11. After deploying some of our production applications with > >> Java > >> > 11, we began to see the resident memory size grow without bound until > >> our > >> > orchestrator killed the applications for excessive memory usage. We've > >> > started to debug this issue, but noticed that the NMT output appears > to > >> be > >> > incorrect. In particular the Compiler section is displaying > >> > > >> > Compiler (reserved=4528KB +1010KB, committed=4528KB +1010KB) > >> > (malloc=4896KB +1206KB #4132 +508) > >> > (arena=18014398509481617KB -196 #5) > >> > > >> > Obviously the arena value here is quite wrong and there's no way the > >> > reserved memory can be less than the malloc memory. Further there's > >> > a 276305KB gap in the RSS size reported by our metrics and the amount > of > >> > memory NMT reports as committed. Here's our JVM args and JDK version, > >> I've > >> > additionally attached the full output of the NMT detailed diff. 
> >> > > >> > Running java11 with JVM arguments: -Djava.net.preferIPv4Stack=true > >> > -Xms6144m -Xmx6g -Xss256k -XX:MetaspaceSize=128m > >> > -XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=128m > >> > -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:+UnlockExperimentalVMOptions > >> > -XX:G1NewSizePercent=20 -XX:G1MaxNewSizePercent=45 > >> > -XX:ParallelGCThreads=8 -XX:ConcGCThreads=6 > >> > -XX:InitiatingHeapOccupancyPercent=35 -XX:+PerfDisableSharedMem > >> > -XX:-UseBiasedLocking -XX:G1HeapRegionSize=4m > >> > -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=../logs > >> > -Djava.io.tmpdir=REDACTED -XX:+ExitOnOutOfMemoryError > >> > -XX:-OmitStackTraceInFastThrow > >> > > >> > -javaagent:/usr/local/appoptics-6.5.1/appoptics-agent.jar=config=appoptics-agent.json > >> > -XX:-PreferContainerQuotaForCPUCount -XX:NativeMemoryTracking=detail > >> > -jar REDACTED > >> > openjdk version "11.0.5" 2019-10-15 > >> > OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.5+10) > >> > OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.5+10, mixed mode) > >> > > >> > > > From kim.barrett at oracle.com Fri Nov 15 06:17:19 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 15 Nov 2019 01:17:19 -0500 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> <9BDEBE4D-DA1F-4087-937A-56ADA1E1F7A5@oracle.com> Message-ID: > On Nov 14, 2019, at 11:26 AM, Schmidt, Lutz wrote: > > Hi Kim, > > that wasn't straightforward. Had to adapt make/hotspot/lib/CompileJvm.gmk. Build settings like HOTSPOT_VERSION_STRING have to flow into the compile step of abstract_vm_version.cpp now. For the details, see my comments below. Ick, that's unfortunate. Almost makes me regret suggesting the new file. Oh well. I was going to suggest you should get a review from a build expert for this, but the changes are quite mechanical and obvious. 
> Other than that, I hope the new webrev is even closer to your dreams: > http://cr.openjdk.java.net/~lucy/webrevs/8233787.02/ I think abstract_vm_version.cpp should #include vm_version.hpp. That way, if Abstract_VM_Version provides any shared helper functions that are defined in terms of VM_Version values, it can get any potentially overridden values. There currently isn't anything like that, though there could be (and perhaps should be, though currently presumably doesn't matter). See jvm_version(), and the initializers for _s_vm_release and _s_internal_vm_info_string. I think you shouldn't do anything about these references in this change. There are also a bunch of files with out of date copyrights. I should have mentioned that before. Other than that, this looks good. I assume you are running this through dev-submit and SAP's build farm to check various platforms. If there are any not covered by that, you should reach out to appropriate platform maintainers. From thomas.schatzl at oracle.com Fri Nov 15 09:02:32 2019 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Fri, 15 Nov 2019 10:02:32 +0100 Subject: RFR (S): 8233702: Introduce helper function to clamp value to range In-Reply-To: <832C5850-7B54-43CB-A688-BEB1EAFDA437@oracle.com> References: <5e24eab5-7a81-3d14-31e4-7d81a10debff@oracle.com> <4F8382A7-2BAE-42D0-9F01-7E57CA588FF2@oracle.com> <8ced83cd-3374-4517-21b7-8f6401c1c81e@oracle.com> <290c83af-f554-9236-4f85-740736374e8d@oracle.com> <832C5850-7B54-43CB-A688-BEB1EAFDA437@oracle.com> Message-ID: <8bf24f6a-b4e0-f3b1-10f6-19a34b9c6fc4@oracle.com> Hi Kim, On 14.11.19 23:52, Kim Barrett wrote: >> On Nov 14, 2019, at 10:58 AM, Thomas Schatzl wrote: >> New webrevs: >> >> http://cr.openjdk.java.net/~tschatzl/8233702/webrev.1_to_2/ (diff) >> http://cr.openjdk.java.net/~tschatzl/8233702/webrev.2/ (full) >> >> Passed hs-tier1-5. >> >> Thanks, >> Thomas > > Looks good. > thanks for your review.
Thomas From Joshua.Zhu at arm.com Fri Nov 15 10:29:49 2019 From: Joshua.Zhu at arm.com (Joshua Zhu (Arm Technology China)) Date: Fri, 15 Nov 2019 10:29:49 +0000 Subject: 8233948: AArch64: Incorrect mapping between OptoReg and VMReg for high 64 bits of Vector Register In-Reply-To: References: Message-ID: Hi, > Please review the following patch: > JBS: https://bugs.openjdk.java.net/browse/JDK-8233948 > Webrev: http://cr.openjdk.java.net/~jzhu/8233948/webrev.00/ Please let me know if there are any comments. Thanks a lot. Best Regards, Joshua From lutz.schmidt at sap.com Fri Nov 15 12:19:29 2019 From: lutz.schmidt at sap.com (Schmidt, Lutz) Date: Fri, 15 Nov 2019 12:19:29 +0000 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> <9BDEBE4D-DA1F-4087-937A-56ADA1E1F7A5@oracle.com> Message-ID: Hi Kim, thanks for reviewing - I understand your comments that way. One more review to go. :-) I made abstract_vm_version.cpp #include vm_version.hpp, and I updated the copyrights. See the webrev#03: http://cr.openjdk.java.net/~lucy/webrevs/8233787.03/ I ran the initial webrev iteration through dev-submit and had it active SAP-internally. The current webrev is active since last night SAP-internally. All builds are green. The tests show only unrelated issues (some JIT compiler asserts). Of course I will run the final webrev through dev-submit. Re test coverage: we do not cover 32-bit platforms. And we do not have zero or minimal builds. Thanks, Lutz On 15.11.19, 07:17, "Kim Barrett" wrote: > On Nov 14, 2019, at 11:26 AM, Schmidt, Lutz wrote: > > Hi Kim, > > that wasn't straightforward. Had to adapt make/hotspot/lib/CompileJvm.gmk. Build settings like HOTSPOT_VERSION_STRING have to flow into the compile step of abstract_vm_version.cpp now. For the details, see my comments below. Ick, that's unfortunate. Almost makes me regret suggesting the new file. Oh well.
I was going to suggest you should get a review from a build expert for this, but the changes are quite mechanical and obvious. > Other than that, I hope the new webrev is even closer to your dreams: > http://cr.openjdk.java.net/~lucy/webrevs/8233787.02/ I think abstract_vm_version.cpp should #include vm_version.hpp. That way, if Abstract_VM_Version provides any shared helper functions that are defined in terms VM_Version values, it can get any potentially overridden values. There currently isn't anything like that, though there could be (and perhaps should be, though currently presumably doesn't matter). See jvm_version(), and the initializers for _s_vm_release and _s_internal_vm_info_string. I think you shouldn't do anyhing about these references in this change. There are also a bunch of files with out of date copyrights. I should have mentioned that before. Other than that, this looks good. I assume you are running this through dev-submit and SAPs build farm to check various platforms. If there are any not covered by that, you should reach out to appropriate platform maintainers. From daniel.daugherty at oracle.com Fri Nov 15 15:08:47 2019 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Fri, 15 Nov 2019 10:08:47 -0500 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> <9BDEBE4D-DA1F-4087-937A-56ADA1E1F7A5@oracle.com> Message-ID: Adding build-dev at ... since this changeset now touches make/hotspot/lib/CompileJvm.gmk. The Build team has a standing request to be included on reviews that touch makefiles... Dan On 11/14/19 11:26 AM, Schmidt, Lutz wrote: > Hi Kim, > > that wasn't straightforward. Had to adapt make/hotspot/lib/CompileJvm.gmk. Build settings like HOTSPOT_VERSION_STRING have to flow into the compile step of abstract_vm_version.cpp now. For the details, see my comments below. 
> > Other than that, I hope the new webrev is even closer to your dreams: > http://cr.openjdk.java.net/~lucy/webrevs/8233787.02/ > > Thanks, > Lutz > > > ?On 14.11.19, 00:34, "Kim Barrett" wrote: > > > On Nov 13, 2019, at 11:42 AM, Schmidt, Lutz wrote: > > > > Hi Kim, > > > > there is a new webrev at http://cr.openjdk.java.net/~lucy/webrevs/8233787.01/ > > > > It should be pretty close to what you view as the "right approach". There weren't too many changes relative to 8233787.00. Most files already had #include runtime/vm_version.hpp. > > This looks much better to me, but many (most?) of the changed > #includes need to be moved into sort order. > > R: tried my best to fix the sort order. Sorry for not paying attention in the first place. > > ------------------------------------------------------------------------------ > src/hotspot/share/runtime/vm_version.cpp > > Abstract_VM_Version definitions should be moved to abstract_vm_version.cpp. > Maybe just rename the file; I think the only thing that would be left > for vm_version.cpp would be VM_Version_init(). But maybe that should > be left behind in vm_version.cpp? Though that makes the review messier. > > R: Everything moved as you suggested. Doesn't make sense to have Abstract_VM_Version:: methods in vm_version.cpp file. > > ------------------------------------------------------------------------------ > src/hotspot/share/runtime/abstract_vm_version.hpp > > Should #include globalDefinitions.hpp. > - uint64_t features() > - #define SUPPORTS_NATIVE_CX8 > > R: Done. > > Should forward-declare class outsputStream. > - print_virtualization_info > - print_matching_lines_from_file (I wonder why this is *here*, but not your problem) > > R: Done. 
> > ------------------------------------------------------------------------------ > > > > From coleen.phillimore at oracle.com Fri Nov 15 15:14:26 2019 From: coleen.phillimore at oracle.com (coleen.phillimore at oracle.com) Date: Fri, 15 Nov 2019 10:14:26 -0500 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> Message-ID: On 11/11/19 11:56 AM, Schmidt, Lutz wrote: > Oh, oh, > looks like I stepped into a beehive... Found JDK-8202579 and JDK-8145956 talking about the unwanted use of Abstract_VM_Version. > > My intended change would not tackle that "mess", as you call it. That's too bad.? Are you sure you don't want to tackle it?? Please ... ? Coleen > But it would make potential future cleanups a little bit easier by ensuring all of hotspot code only includes vm_version.hpp. I'm in the process of modifying my initial change to reflect Kim's suggestions. I'll send it out Tuesday (hopefully), Wednesday the latest. > > Regards, > Lutz > > ?On 11.11.19, 11:54, "David Holmes" wrote: > > Also note we have an open RFE to try and fix the VM_Version vs > Abstract_VM_version mess. But it's such a mess it keeps getting deferred. > > David > > On 9/11/2019 11:58 am, Kim Barrett wrote: > >> On Nov 7, 2019, at 10:59 AM, Schmidt, Lutz wrote: > >> > >> Dear all, > >> > >> may I please request reviews for this cleanup? It's a lot of files with just some #include statement changes. That makes the review process tedious and not very challenging intellectually. > >> > >> Anyway, your effort is very much appreciated! > >> > >> jdk/submit results pending. > >> > >> Bug: https://bugs.openjdk.java.net/browse/JDK-8233787 > >> Webrev: http://cr.openjdk.java.net/~lucy/webrevs/8233787.00/ > >> > >> Thank you! > >> Lutz > > > > I don't think this is the right approach. It makes all the > > vm_version_.hpp files not be stand alone, which I think is not a > > good idea. 
> > I think the real problem is that Abstract_VM_Version is declared in > > vm_version.hpp. I think that file should be split into > > abstract_vm_version.hpp (with most of what's currently in > > vm_version.hpp), with vm_version.hpp being just (untested) > > > > > > #ifndef SHARE_RUNTIME_VM_VERSION_HPP > > #define SHARE_RUNTIME_VM_VERSION_HPP > > > > #include "utilities/macros.hpp" > > #include CPU_HEADER(vm_version) > > > > #endif // SHARE_RUNTIME_VM_VERSION_HPP > > > > > > Change all the vm_version_.hpp files to #include > > abstract_vm_version.hpp rather than vm_version.hpp. > > > > Other than in vm_version_.hpp files, always #include > > vm_version.hpp. > > > > From matthias.baesken at sap.com Fri Nov 15 15:39:35 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Fri, 15 Nov 2019 15:39:35 +0000 Subject: RFR [XS]: 8233219: NMT output on AIX misses some categories Message-ID: Hello, please review this small fix . I noticed that since ~ end of March 2019, some sections, especially the "Java Heap" category, were missing from NMT output on AIX . Also the category "Test" ( Test type for verifying NMT ) that is tested by a few NMT related tests is missing , which caused most of the runtime/NMT tests to fail . The following small fix brings the Java Heap category back and fixes also the runtime/NMT tests on AIX . Thanks to Zhengyu for pointing me in the right direction when looking into the issue . Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8233219 http://cr.openjdk.java.net/~mbaesken/webrevs/8233219.0/ Thanks, Matthias From thomas.stuefe at gmail.com Fri Nov 15 15:51:48 2019 From: thomas.stuefe at gmail.com (Thomas Stüfe) Date: Fri, 15 Nov 2019 16:51:48 +0100 Subject: RFR [XS]: 8233219: NMT output on AIX misses some categories In-Reply-To: References: Message-ID: Looks good. Thanks for fixing this. ...Thomas On Fri 15. Nov 2019 at 16:40, Baesken, Matthias wrote: > Hello, please review this small fix .
> > I noticed that since ~ end of March 2019, some sections especially the > "Java Heap" category were missing from NMT output on AIX . > > Also the category "Test" ( Test type for verifying NMT ) that is tested by > a few NMT related tests is missing , which caused most of the > runtime/NMT tests to fail . > The following small fix brings the Java Heap category back and fixes > also the runtime/NMT tests on AIX . > > Thanks to Zhengyu for pointing me in the right direction when looking > into the issue . > > > Bug/webrev : > > https://bugs.openjdk.java.net/browse/JDK-8233219 > > http://cr.openjdk.java.net/~mbaesken/webrevs/8233219.0/ > > Thanks, Matthias > From zgu at redhat.com Fri Nov 15 16:02:13 2019 From: zgu at redhat.com (Zhengyu Gu) Date: Fri, 15 Nov 2019 11:02:13 -0500 Subject: RFR [XS]: 8233219: NMT output on AIX misses some categories In-Reply-To: References: Message-ID: <38275573-a23e-524f-149e-b4bd33ea2871@redhat.com> Looks good to me. Thanks, -Zhengyu On 11/15/19 10:39 AM, Baesken, Matthias wrote: > Hello, please review this small fix . > > I noticed that since ~ end of March 2019, some sections especially > the "Java Heap" category were missing from NMT output on AIX . > > Also the category "Test" ( Test type for verifying NMT ) that is tested > by a few NMT related tests is missing , which caused most of the > runtime/NMT tests to fail . > > The following small fix brings the Java Heap category back and fixes > also the runtime/NMT tests on AIX . > > Thanks to Zhengyu for pointing me in the right direction when > looking into the issue .
> > Bug/webrev : > > https://bugs.openjdk.java.net/browse/JDK-8233219 > > http://cr.openjdk.java.net/~mbaesken/webrevs/8233219.0/ > > Thanks, Matthias > From erik.joelsson at oracle.com Fri Nov 15 16:31:57 2019 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 15 Nov 2019 08:31:57 -0800 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> <9BDEBE4D-DA1F-4087-937A-56ADA1E1F7A5@oracle.com> Message-ID: <47239af1-1d8a-e313-bda1-c62785fe9058@oracle.com> Build change looks ok. /Erik On 2019-11-15 07:08, Daniel D. Daugherty wrote: > Adding build-dev at ... since this changeset now touches > make/hotspot/lib/CompileJvm.gmk. The Build team has a standing > request to be included on reviews that touch makefiles... > > Dan > > > On 11/14/19 11:26 AM, Schmidt, Lutz wrote: >> Hi Kim, >> >> that wasn't straightforward. Had to adapt >> make/hotspot/lib/CompileJvm.gmk. Build settings like >> HOTSPOT_VERSION_STRING have to flow into the compile step of >> abstract_vm_version.cpp now. For the details, see my comments below. >> >> Other than that, I hope the new webrev is even closer to your dreams: >> ??? http://cr.openjdk.java.net/~lucy/webrevs/8233787.02/ >> >> Thanks, >> Lutz >> >> >> ?On 14.11.19, 00:34, "Kim Barrett" wrote: >> >> ???? > On Nov 13, 2019, at 11:42 AM, Schmidt, Lutz >> wrote: >> ???? > >> ???? > Hi Kim, >> ???? > >> ???? > there is a new webrev at >> http://cr.openjdk.java.net/~lucy/webrevs/8233787.01/ >> ???? > >> ???? > It should be pretty close to what you view as the "right >> approach". There weren't too many changes relative to 8233787.00. >> Most files already had #include runtime/vm_version.hpp. >> ???? ???? This looks much better to me, but many (most?) of the changed >> ???? #includes need to be moved into sort order. >> >> R: tried my best to fix the sort order. Sorry for not paying >> attention in the first place. 
>> ------------------------------------------------------------------------------ >> src/hotspot/share/runtime/vm_version.cpp >> Abstract_VM_Version definitions should be moved to >> abstract_vm_version.cpp. >> Maybe just rename the file; I think the only thing that would be >> left >> for vm_version.cpp would be VM_Version_init(). But maybe that >> should >> be left behind in vm_version.cpp? Though that makes the review >> messier. >> >> R: Everything moved as you suggested. Doesn't make sense to have >> Abstract_VM_Version:: methods in vm_version.cpp file. >> ------------------------------------------------------------------------------ >> src/hotspot/share/runtime/abstract_vm_version.hpp >> Should #include globalDefinitions.hpp. >> - uint64_t features() >> - #define SUPPORTS_NATIVE_CX8 >> >> R: Done. >> Should forward-declare class outputStream. >> - print_virtualization_info >> - print_matching_lines_from_file (I wonder why this is *here*, >> but not your problem) >> >> R: Done. >> ------------------------------------------------------------------------------ >> > From sgehwolf at redhat.com Fri Nov 15 16:56:45 2019 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Fri, 15 Nov 2019 17:56:45 +0100 Subject: [PING!] RFR: 8230305: Cgroups v2: Container awareness In-Reply-To: References: <072f66ee8c44034831b4e38f6470da4bff6edd07.camel@redhat.com> <7540a208e306ab957032b18178a53c6afa105d33.camel@redhat.com> Message-ID: <5eec97c04d86562346243c1db3832e86e13697a1.camel@redhat.com> On Fri, 2019-11-08 at 15:21 +0100, Severin Gehwolf wrote: > Hi Bob, > > On Wed, 2019-11-06 at 10:47 +0100, Severin Gehwolf wrote: > > On Tue, 2019-11-05 at 16:54 -0500, Bob Vandette wrote: > > > Severin, > > > > > > Thanks for taking on this cgroup v2 improvement. > > > > > > In general I like the implementation and the refactoring. The CachedMetric class is nice.
> > > We can add any metric we want to cache in a more general way. > > > > > > Is this the latest version of the webrev? > > > > > > http://cr.openjdk.java.net/~sgehwolf/webrevs/cgroupsv2-hotspot/03/webrev/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp.html > > > > > > It looks like you need to add the caching support for active_processor_count (JDK-8227006). > [...] > > I'll do a proper rebase ASAP. > > Latest webrev: > http://cr.openjdk.java.net/~sgehwolf/webrevs/cgroupsv2-hotspot/05/webrev/ > > > > I'm not sure it's worth providing different strings for Unlimited versus Max or Scaled shares. > > > I'd just try to be compatible with the cgroupv2 output so you don't have to change the test. > > > > OK. Will do. > > Unfortunately, there is no way of NOT changing TestCPUAwareness.java as > it expects CPU Shares to be written to the cgroup filesystem verbatim. > That's no longer the case for cgroups v2 (at least for crun). Either > way, most test changes are gone now. > > > > I wonder if it's worth trying to synthesize memory_max_usage_in_bytes() by keeping the highest > > > value ever returned by the API. > > > > Interesting idea. I'll ponder this a bit and get back to you. > > This has been implemented. I'm not sure this is correct, though. It > merely piggy-backs on calls to memory_usage_in_bytes() and keeps the > high watermark value of that. > > Testing passed on F31 with cgroups v2 controllers properly configured > (podman) and hybrid (legacy hierarchy) with docker/podman. > > Thoughts? Ping?
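As background for the "Unlimited versus Max" question above: cgroup v2 interface files such as cpu.max report the literal string "max" when a limit is not set, e.g. "max 100000" for an unlimited quota with a 100ms period. A rough, self-contained parsing sketch follows; the class and method names are illustrative, not the actual JDK Metrics code, and the -1 convention for "unlimited" is an assumption borrowed from common Java metrics APIs.

```java
// Sketch of parsing a cgroup v2 "cpu.max" value. In a real container the
// input line would be read from /sys/fs/cgroup/<group>/cpu.max; here it is
// hard-coded so the snippet runs anywhere.
public class CpuMaxParseDemo {
    // Returns the quota in microseconds, or -1 if the quota is "max" (unlimited).
    static long parseQuota(String cpuMax) {
        String quota = cpuMax.trim().split("\\s+")[0];
        return quota.equals("max") ? -1L : Long.parseLong(quota);
    }

    public static void main(String[] args) {
        System.out.println(parseQuota("max 100000"));   // unlimited quota -> -1
        System.out.println(parseQuota("50000 100000")); // 50ms quota per 100ms period
    }
}
```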
Metrics work proposed for RFR here: http://mail.openjdk.java.net/pipermail/core-libs-dev/2019-November/063464.html Thanks, Severin From brent.christian at oracle.com Fri Nov 15 18:38:20 2019 From: brent.christian at oracle.com (Brent Christian) Date: Fri, 15 Nov 2019 10:38:20 -0800 Subject: RFR 8233272 : The Class.forName specification should be updated to match the long-standing implementation with respect to class linking In-Reply-To: <1e4cd61f-dddd-b221-f0b4-448e64a4b440@oracle.com> References: <79528f01-3418-eb51-4dbd-897e4e97f853@oracle.com> <7ddee581-c68b-bf40-22db-871205b8ca1b@oracle.com> <3d305a6d-fd45-f6e5-93f2-c47250f31985@oracle.com> <99dded93-17d2-ea40-deef-56efd10218c7@oracle.com> <1e4cd61f-dddd-b221-f0b4-448e64a4b440@oracle.com> Message-ID: <38a77634-e3d7-426a-ae2d-c957804bf8f3@oracle.com> On 11/14/19 4:46 PM, Mandy Chung wrote: > On 11/14/19 4:42 PM, David Holmes wrote: >> >> If you really want to test both positive and negative cases from a >> clean slate then I would suggest modifying the test slightly and using >> two @run commands - one to try to initialize and one to not. > > Yes this is what I was thinking.? Two separate @run commands with an > argument to indicate if initialize is true or false would do it. > That sounds good. Test updated here: http://cr.openjdk.java.net/~bchristi/8233272/webrev-04/ Thanks, -Brent From lutz.schmidt at sap.com Fri Nov 15 21:50:04 2019 From: lutz.schmidt at sap.com (Schmidt, Lutz) Date: Fri, 15 Nov 2019 21:50:04 +0000 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: <47239af1-1d8a-e313-bda1-c62785fe9058@oracle.com> References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> <9BDEBE4D-DA1F-4087-937A-56ADA1E1F7A5@oracle.com> <47239af1-1d8a-e313-bda1-c62785fe9058@oracle.com> Message-ID: Thank you, Erik, for confirming the build change. And sorry for not including you immediately. Regards, Lutz P.S.: And thanks, Daniel, for including the build team. 
On 15.11.19, 17:31, "Erik Joelsson" wrote: Build change looks ok. /Erik On 2019-11-15 07:08, Daniel D. Daugherty wrote: > Adding build-dev at ... since this changeset now touches > make/hotspot/lib/CompileJvm.gmk. The Build team has a standing > request to be included on reviews that touch makefiles... > > Dan > > > On 11/14/19 11:26 AM, Schmidt, Lutz wrote: >> Hi Kim, >> >> that wasn't straightforward. Had to adapt >> make/hotspot/lib/CompileJvm.gmk. Build settings like >> HOTSPOT_VERSION_STRING have to flow into the compile step of >> abstract_vm_version.cpp now. For the details, see my comments below. >> >> Other than that, I hope the new webrev is even closer to your dreams: >> http://cr.openjdk.java.net/~lucy/webrevs/8233787.02/ >> >> Thanks, >> Lutz >> >> >> On 14.11.19, 00:34, "Kim Barrett" wrote: >> >> > On Nov 13, 2019, at 11:42 AM, Schmidt, Lutz >> wrote: >> > >> > Hi Kim, >> > >> > there is a new webrev at >> http://cr.openjdk.java.net/~lucy/webrevs/8233787.01/ >> > >> > It should be pretty close to what you view as the "right >> approach". There weren't too many changes relative to 8233787.00. >> Most files already had #include runtime/vm_version.hpp. >> This looks much better to me, but many (most?) of the changed >> #includes need to be moved into sort order. >> >> R: tried my best to fix the sort order. Sorry for not paying >> attention in the first place. >> ------------------------------------------------------------------------------ >> src/hotspot/share/runtime/vm_version.cpp >> Abstract_VM_Version definitions should be moved to >> abstract_vm_version.cpp. >> Maybe just rename the file; I think the only thing that would be >> left >> for vm_version.cpp would be VM_Version_init(). But maybe that >> should >> be left behind in vm_version.cpp? Though that makes the review >> messier. >> >> R: Everything moved as you suggested. Doesn't make sense to have >> Abstract_VM_Version:: methods in vm_version.cpp file.
>> ------------------------------------------------------------------------------ >> src/hotspot/share/runtime/abstract_vm_version.hpp >> Should #include globalDefinitions.hpp. >> - uint64_t features() >> - #define SUPPORTS_NATIVE_CX8 >> >> R: Done. >> Should forward-declare class outputStream. >> - print_virtualization_info >> - print_matching_lines_from_file (I wonder why this is *here*, >> but not your problem) >> >> R: Done. >> ------------------------------------------------------------------------------ >> > From lutz.schmidt at sap.com Fri Nov 15 22:01:42 2019 From: lutz.schmidt at sap.com (Schmidt, Lutz) Date: Fri, 15 Nov 2019 22:01:42 +0000 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> Message-ID: Hi Coleen, I feel flattered. I could actually imagine diving into this. BUT: there are some other open work items I have promised to push forward. A few of them are not even started yet. As it is my intention to keep promises, I can't give another promise at this time. And there are those other tasks my company pays me money for... Lutz On 15.11.19, 16:14, "hotspot-dev on behalf of coleen.phillimore at oracle.com" wrote: On 11/11/19 11:56 AM, Schmidt, Lutz wrote: > Oh, oh, > looks like I stepped into a beehive... Found JDK-8202579 and JDK-8145956 talking about the unwanted use of Abstract_VM_Version. > > My intended change would not tackle that "mess", as you call it. That's too bad. Are you sure you don't want to tackle it? Please ... Coleen > But it would make potential future cleanups a little bit easier by ensuring all of hotspot code only includes vm_version.hpp. I'm in the process of modifying my initial change to reflect Kim's suggestions. I'll send it out Tuesday (hopefully), Wednesday the latest. > > Regards, > Lutz > > On 11.11.19, 11:54, "David Holmes" wrote: > > Also note we have an open RFE to try and fix the VM_Version vs > Abstract_VM_version mess.
But it's such a mess it keeps getting deferred. > > David > > On 9/11/2019 11:58 am, Kim Barrett wrote: > >> On Nov 7, 2019, at 10:59 AM, Schmidt, Lutz wrote: > >> > >> Dear all, > >> > >> may I please request reviews for this cleanup? It's a lot of files with just some #include statement changes. That makes the review process tedious and not very challenging intellectually. > >> > >> Anyway, your effort is very much appreciated! > >> > >> jdk/submit results pending. > >> > >> Bug: https://bugs.openjdk.java.net/browse/JDK-8233787 > >> Webrev: http://cr.openjdk.java.net/~lucy/webrevs/8233787.00/ > >> > >> Thank you! > >> Lutz > > > > I don't think this is the right approach. It makes all the > > vm_version_.hpp files not be stand alone, which I think is not a > > good idea. > > > > I think the real problem is that Abstract_VM_Version is declared in > > vm_version.hpp. I think that file should be split into > > abstract_vm_version.hpp (with most of what's currently in > > vm_version.hpp), with vm_version.hpp being just (untested) > > > > > > #ifndef SHARE_RUNTIME_VM_VERSION_HPP > > #define SHARE_RUNTIME_VM_VERSION_HPP > > > > #include "utilities/macros.hpp" > > #include CPU_HEADER(vm_version) > > > > #endif // SHARE_RUNTIME_VM_VERSION_HPP > > > > > > Change all the vm_version_.hpp files #include > > abstract_vm_version.hpp rather than vm_version.hpp. > > > > Other than in vm_version_.hpp files, always #include > > vm_version.hpp. > > > > From kim.barrett at oracle.com Fri Nov 15 23:00:00 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 15 Nov 2019 18:00:00 -0500 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> <9BDEBE4D-DA1F-4087-937A-56ADA1E1F7A5@oracle.com> Message-ID: <8130C4B4-6BCD-48C6-B26F-35FAE070D915@oracle.com> > On Nov 15, 2019, at 7:19 AM, Schmidt, Lutz wrote: > > Hi Kim, > > thanks for reviewing - I understand your comments that way. One more review to go.
:-) > > I made abstract_vm_version.cpp #include vm_version.hpp, and I updated the copyrights. See the webrev#03: > http://cr.openjdk.java.net/~lucy/webrevs/8233787.03/ > > I ran the initial webrev iteration through dev-submit and had it active SAP-internally. The current webrev is active since last night SAP-internally. All builds are green. The tests show only unrelated issues (some JIT compiler asserts). Of course I will run the final webrev through dev-submit. > > Re test coverage: we do not cover 32-bit platforms. And we do not have zero or minimal builds. Looks good. From mandy.chung at oracle.com Fri Nov 15 23:16:51 2019 From: mandy.chung at oracle.com (Mandy Chung) Date: Fri, 15 Nov 2019 15:16:51 -0800 Subject: RFR 8233272 : The Class.forName specification should be updated to match the long-standing implementation with respect to class linking In-Reply-To: <38a77634-e3d7-426a-ae2d-c957804bf8f3@oracle.com> References: <79528f01-3418-eb51-4dbd-897e4e97f853@oracle.com> <7ddee581-c68b-bf40-22db-871205b8ca1b@oracle.com> <3d305a6d-fd45-f6e5-93f2-c47250f31985@oracle.com> <99dded93-17d2-ea40-deef-56efd10218c7@oracle.com> <1e4cd61f-dddd-b221-f0b4-448e64a4b440@oracle.com> <38a77634-e3d7-426a-ae2d-c957804bf8f3@oracle.com> Message-ID: On 11/15/19 10:38 AM, Brent Christian wrote: > That sounds good. Test updated here: > http://cr.openjdk.java.net/~bchristi/8233272/webrev-04/ Looks good. Minor: an additional check to consider is whether NCDFE's cause's message contains "MissingClass", just to be sure. No new webrev needed.
Mandy From lutz.schmidt at sap.com Fri Nov 15 23:51:20 2019 From: lutz.schmidt at sap.com (Schmidt, Lutz) Date: Fri, 15 Nov 2019 23:51:20 +0000 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: <8130C4B4-6BCD-48C6-B26F-35FAE070D915@oracle.com> References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> <9BDEBE4D-DA1F-4087-937A-56ADA1E1F7A5@oracle.com> <8130C4B4-6BCD-48C6-B26F-35FAE070D915@oracle.com> Message-ID: Thanks, Kim, for all the useful hints and for the Review. Regards, Lutz On 16.11.19, 00:00, "Kim Barrett" wrote: > On Nov 15, 2019, at 7:19 AM, Schmidt, Lutz wrote: > > Hi Kim, > > thanks for reviewing - I understand your comments that way. One more review to go. :-) > > I made abstract_vm_version.cpp #include vm_version.hpp, and I updated the copyrights. See the webrev#03: > http://cr.openjdk.java.net/~lucy/webrevs/8233787.03/ > > I ran the initial webrev iteration through dev-submit and had it active SAP-internally. The current webrev is active since last night SAP-internally. All builds are green. The tests show only unrelated issues (some JIT compiler asserts). Of course I will run the final webrev through dev-submit. > > Re test coverage: we do not cover 32-bit platforms. And we do not have zero or minimal builds. Looks good.
From david.holmes at oracle.com Mon Nov 18 03:31:03 2019 From: david.holmes at oracle.com (David Holmes) Date: Mon, 18 Nov 2019 13:31:03 +1000 Subject: RFR 8233272 : The Class.forName specification should be updated to match the long-standing implementation with respect to class linking In-Reply-To: <38a77634-e3d7-426a-ae2d-c957804bf8f3@oracle.com> References: <79528f01-3418-eb51-4dbd-897e4e97f853@oracle.com> <7ddee581-c68b-bf40-22db-871205b8ca1b@oracle.com> <3d305a6d-fd45-f6e5-93f2-c47250f31985@oracle.com> <99dded93-17d2-ea40-deef-56efd10218c7@oracle.com> <1e4cd61f-dddd-b221-f0b4-448e64a4b440@oracle.com> <38a77634-e3d7-426a-ae2d-c957804bf8f3@oracle.com> Message-ID: On 16/11/2019 4:38 am, Brent Christian wrote: > On 11/14/19 4:46 PM, Mandy Chung wrote: >> On 11/14/19 4:42 PM, David Holmes wrote: >>> >>> If you really want to test both positive and negative cases from a >>> clean slate then I would suggest modifying the test slightly and >>> using two @run commands - one to try to initialize and one to not. >> >> Yes this is what I was thinking. Two separate @run commands with an >> argument to indicate if initialize is true or false would do it. >> > > That sounds good. Test updated here: > http://cr.openjdk.java.net/~bchristi/8233272/webrev-04/ Minor optimisations: 35 * @compile MissingClass.java 36 * @compile Container.java 37 * 38 * @run main/othervm ClassFileInstaller -jar classes.jar Container Container$1 You can use a single @compile line to compile both classes. You can use "@run driver" for ClassfileInstaller. No need to see updated webrev. Thanks, David > Thanks, > -Brent From augustnagro at gmail.com Mon Nov 18 04:02:22 2019 From: augustnagro at gmail.com (August Nagro) Date: Sun, 17 Nov 2019 22:02:22 -0600 Subject: Bounds Check Elimination with Fast-Range Message-ID: <19544187-6670-4A65-AF47-297F38E8D555@gmail.com> Hi! The fast-range[1] algorithm is used to map well-distributed hash functions to a range of size N.
It is ~4x faster than using integer modulo, and does not require the table to be a power of two. It is used by libraries like Tensorflow and the StockFish chess engine. The idea is that, given (int) hash h and (int) size N, then (((long) h) * N) >>> 32 is a good mapping. However, will the compiler be able to eliminate array range-checking? HashMap's approach using power-of-two xor/mask was patched here: https://bugs.openjdk.java.net/browse/JDK-8003585. Sincerely, - August [1]: https://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/ From martin.doerr at sap.com Mon Nov 18 12:02:21 2019 From: martin.doerr at sap.com (Doerr, Martin) Date: Mon, 18 Nov 2019 12:02:21 +0000 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: <8130C4B4-6BCD-48C6-B26F-35FAE070D915@oracle.com> References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> <9BDEBE4D-DA1F-4087-937A-56ADA1E1F7A5@oracle.com> <8130C4B4-6BCD-48C6-B26F-35FAE070D915@oracle.com> Message-ID: Hi Lutz, I've looked over the complete webrev .03 and it looks good to me. I appreciate having the abstract version in separate files and the regular vm_version basically include the platform stuff. Best regards, Martin > -----Original Message----- > From: hotspot-dev On Behalf Of > Kim Barrett > Sent: Samstag, 16. November 2019 00:00 > To: Schmidt, Lutz > Cc: hotspot-dev at openjdk.java.net > Subject: Re: RFR(M): 8233787: Break cycle in vm_version* includes > > > On Nov 15, 2019, at 7:19 AM, Schmidt, Lutz wrote: > > > > Hi Kim, > > > > thanks for reviewing - I understand your comments that way. One more > review to go. :-) > > > > I made abstract_vm_version.cpp #include vm_version.hpp, and I updated > the copyrights. See the webrev#03: > > http://cr.openjdk.java.net/~lucy/webrevs/8233787.03/ > > > > I ran the initial webrev iteration through dev-submit and had it active SAP- > internally. The current webrev is active since last night SAP-internally. All > builds are green.
The tests show only unrelated issues (some JIT compiler > asserts). Of course I will run the final webrev through dev-submit. > > > > Re test coverage: we do not cover 32-bit platforms. And we do not have > zero or minimal builds. > > Looks good. From lutz.schmidt at sap.com Mon Nov 18 12:05:10 2019 From: lutz.schmidt at sap.com (Schmidt, Lutz) Date: Mon, 18 Nov 2019 12:05:10 +0000 Subject: RFR(M): 8233787: Break cycle in vm_version* includes In-Reply-To: References: <52D7C0FA-BDAE-43D6-9607-2F4FAA58524A@sap.com> <9BDEBE4D-DA1F-4087-937A-56ADA1E1F7A5@oracle.com> <8130C4B4-6BCD-48C6-B26F-35FAE070D915@oracle.com> Message-ID: <19482F90-6B0C-4804-B37E-C42B54D0243E@sap.com> Hi Martin, thank you for going through all these "simple" modifications. And thanks for the review! I'll send the stuff through jdk/submit and then push it. Regards, Lutz On 18.11.19, 13:02, "Doerr, Martin" wrote: Hi Lutz, I've looked over the complete webrev .03 and it looks good to me. I appreciate having the abstract version in separate files and the regular vm_version basically include the platform stuff. Best regards, Martin > -----Original Message----- > From: hotspot-dev On Behalf Of > Kim Barrett > Sent: Samstag, 16. November 2019 00:00 > To: Schmidt, Lutz > Cc: hotspot-dev at openjdk.java.net > Subject: Re: RFR(M): 8233787: Break cycle in vm_version* includes > > > On Nov 15, 2019, at 7:19 AM, Schmidt, Lutz wrote: > > > > Hi Kim, > > > > thanks for reviewing - I understand your comments that way. One more > review to go. :-) > > > > I made abstract_vm_version.cpp #include vm_version.hpp, and I updated > the copyrights. See the webrev#03: > > http://cr.openjdk.java.net/~lucy/webrevs/8233787.03/ > > > > I ran the initial webrev iteration through dev-submit and had it active SAP- > internally. The current webrev is active since last night SAP-internally. All > builds are green. The tests show only unrelated issues (some JIT compiler > asserts).
Of course I will run the final webrev through dev-submit. > > > > Re test coverage: we do not cover 32-bit platforms. And we do not have > zero or minimal builds. > > Looks good. From vladimir.x.ivanov at oracle.com Mon Nov 18 12:34:55 2019 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Mon, 18 Nov 2019 15:34:55 +0300 Subject: Bounds Check Elimination with Fast-Range In-Reply-To: <19544187-6670-4A65-AF47-297F38E8D555@gmail.com> References: <19544187-6670-4A65-AF47-297F38E8D555@gmail.com> Message-ID: (CCing hotspot-compiler-dev at ...) Thanks for the reference, August. Indeed the proposed approach looks very promising. I don't think range-check elimination in C2 can optimize such code shape (haven't verified it with test cases yet though). Filed an RFE for it: https://bugs.openjdk.java.net/browse/JDK-8234333 It looks like it should be pretty straightforward to cover this particular case. Also, it's worth looking at how Graal handles it and file a separate RFE if it doesn't optimize it well. Best regards, Vladimir Ivanov On 18.11.2019 07:02, August Nagro wrote: > Hi! > > The fast-range[1] algorithm is used to map well-distributed hash functions to a range of size N. It is ~4x faster than using integer modulo, and does not require the table to be a power of two. It is used by libraries like Tensorflow and the StockFish chess engine. > > The idea is that, given (int) hash h and (int) size N, then ((long) h) > * N) >>> 32 is a good mapping. > > However, will the compiler be able to eliminate array range-checking? HashMap's approach using power-of-two xor/mask was patched here: https://bugs.openjdk.java.net/browse/JDK-8003585.
> > Sincerely, > > - August > > [1]: https://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/ > From fweimer at redhat.com Mon Nov 18 13:10:27 2019 From: fweimer at redhat.com (Florian Weimer) Date: Mon, 18 Nov 2019 14:10:27 +0100 Subject: Bounds Check Elimination with Fast-Range In-Reply-To: <19544187-6670-4A65-AF47-297F38E8D555@gmail.com> (August Nagro's message of "Sun, 17 Nov 2019 22:02:22 -0600") References: <19544187-6670-4A65-AF47-297F38E8D555@gmail.com> Message-ID: <87r22549fg.fsf@oldenburg2.str.redhat.com> * August Nagro: > The fast-range[1] algorithm is used to map well-distributed hash > functions to a range of size N. It is ~4x faster than using integer > modulo, and does not require the table to be a power of two. It is > used by libraries like Tensorflow and the StockFish chess engine. > > The idea is that, given (int) hash h and (int) size N, then ((long) h) > * N) >>> 32 is a good mapping. I looked at this in the past weeks in a different context, and I don't think this would work because we have: jshell> Integer.hashCode(0) $1 ==> 0 jshell> Integer.hashCode(1) $2 ==> 1 jshell> Integer.hashCode(2) $3 ==> 2 jshell> "a".hashCode() $4 ==> 97 jshell> "b".hashCode() $5 ==> 98 Under the allegedly good mapping, those all map to bucket zero even for really large arrays, which is not acceptable. The multiplication shortcut only works for hash functions which behave in certain ways. Something FNV-style for strings is probably okay, but most Java hashCode() implementations likely are not. For non-power-of-two bucket counts, one could try to pre-compute the reciprocal as explained in Hacker's Delight and in these posts: (I need to write to the author and have some of the math fixed, but I think the general direction is solid.) For an internal hash table, it is possible to use primes which are convenient for the saturating increment algorithm because the choice of bucket count is an implementation detail to some extent. 
(It is not in my case, so it would need data-dependent branches, which is kind of counter-productive.) Not discussed on the quoted pages is a generalization which uses hashCode - bucketCount * (int) Long.multiplyHigh(hashCode + 1L, magic) as the bucket number. That works for any table size that is not a power of two, but requires a fast multiplier to get the upper half of a 64x64 product. Thanks, Florian From john.r.rose at oracle.com Mon Nov 18 17:45:38 2019 From: john.r.rose at oracle.com (John Rose) Date: Mon, 18 Nov 2019 09:45:38 -0800 Subject: Bounds Check Elimination with Fast-Range In-Reply-To: <87r22549fg.fsf@oldenburg2.str.redhat.com> References: <19544187-6670-4A65-AF47-297F38E8D555@gmail.com> <87r22549fg.fsf@oldenburg2.str.redhat.com> Message-ID: <1AB809E9-27B3-423E-AB1A-448D6B4689F6@oracle.com> On Nov 18, 2019, at 5:10 AM, Florian Weimer wrote: > >> >> The idea is that, given (int) hash h and (int) size N, then ((long) h) >> * N) >>> 32 is a good mapping. > > I looked at this in the past weeks in a different context, and I don't > think this would work because we have: That technique appears to require either a well-conditioned hash code (which is not the case with Integer.hashCode) or else a value of N that performs extra mixing on h. (So a very *non-*power-of-two value of N would be better here, i.e., N with larger popcount.) A little more mixing should help the problem Florian reports with a badly conditioned h. 
Given this: int fr(int h) { return (int)(((long)h * N) >>> 32); } int h = x.hashCode(); //int bucket = fr(h); // weak if h is badly conditioned then, assuming multiplication is cheap: int bucket = fr(h * M); // M = 0x2357BD or something or maybe something fast and sloppy like: int bucket = fr(h + (h << 8)); or even: int bucket = fr(h) ^ (h & (N-1)); From augustnagro at gmail.com Mon Nov 18 19:26:30 2019 From: augustnagro at gmail.com (August Nagro) Date: Mon, 18 Nov 2019 13:26:30 -0600 Subject: Bounds Check Elimination with Fast-Range In-Reply-To: <1AB809E9-27B3-423E-AB1A-448D6B4689F6@oracle.com> References: <19544187-6670-4A65-AF47-297F38E8D555@gmail.com> <87r22549fg.fsf@oldenburg2.str.redhat.com> <1AB809E9-27B3-423E-AB1A-448D6B4689F6@oracle.com> Message-ID: Yes, exactly. One can also use Fibonacci hashing to ensure that an arbitrary Key.hashCode() is well distributed. See for instance my implementation of Universal Hashing: https://gist.github.com/AugustNagro/4f2d70d261347e515efe0f87de9e8dc2 On Mon, Nov 18, 2019 at 11:45 AM John Rose wrote: > On Nov 18, 2019, at 5:10 AM, Florian Weimer wrote: > > > > The idea is that, given (int) hash h and (int) size N, then ((long) h) > * N) >>> 32 is a good mapping. > > > I looked at this in the past weeks in a different context, and I don't > think this would work because we have: > > > That technique appears to require either a well-conditioned hash code > (which is not the case with Integer.hashCode) or else a value of N that > performs extra mixing on h. (So a very *non-*power-of-two value of N > would be better here, i.e., N with larger popcount.) > > A little more mixing should help the problem Florian reports with a > badly conditioned h.
Given this: > > int fr(int h) { return (int)(((long)h * N) >>> 32); } > int h = x.hashCode(); > //int bucket = fr(h); // weak if h is badly conditioned > > then, assuming multiplication is cheap: > > int bucket = fr(h * M); // M = 0x2357BD or something > > or maybe something fast and sloppy like: > > int bucket = fr(h + (h << 8)); > > or even: > > int bucket = fr(h) ^ (h & (N-1)); > > From fweimer at redhat.com Mon Nov 18 20:17:05 2019 From: fweimer at redhat.com (Florian Weimer) Date: Mon, 18 Nov 2019 21:17:05 +0100 Subject: Bounds Check Elimination with Fast-Range In-Reply-To: <1AB809E9-27B3-423E-AB1A-448D6B4689F6@oracle.com> (John Rose's message of "Mon, 18 Nov 2019 09:45:38 -0800") References: <19544187-6670-4A65-AF47-297F38E8D555@gmail.com> <87r22549fg.fsf@oldenburg2.str.redhat.com> <1AB809E9-27B3-423E-AB1A-448D6B4689F6@oracle.com> Message-ID: <877e3x0wji.fsf@oldenburg2.str.redhat.com> * John Rose: > On Nov 18, 2019, at 5:10 AM, Florian Weimer wrote: >> >>> >>> The idea is that, given (int) hash h and (int) size N, then ((long) h) >>> * N) >>> 32 is a good mapping. >> >> I looked at this in the past weeks in a different context, and I don't >> think this would work because we have: > > That technique appears to require either a well-conditioned hash code > (which is not the case with Integer.hashCode) or else a value of N that > performs extra mixing on h. (So a very *non-*power-of-two value of N > would be better here, i.e., N with larger popcount.) > > A little more mixing should help the problem Florian reports with a > badly conditioned h. Given this: > > int fr(int h) { return (int)(((long)h * N) >>> 32); } > int h = x.hashCode(); > //int bucket = fr(h); // weak if h is badly conditioned > > then, assuming multiplication is cheap: (Back-to-back multiplications probably are not.) 
> int bucket = fr(h * M); // M = 0x2357BD or something > > or maybe something fast and sloppy like: > > int bucket = fr(h + (h << 8)); > > or even: > > int bucket = fr(h) ^ (h & (N-1)); Does this really work? I don't think so. I think this kind of perturbation is quite expensive. Arm's BITR should be helpful here. But even though this operation is commonly needed and easily implemented in hardware, it's rarely found in CPUs. Any scheme with another multiplication is probably not an improvement over the multiply-shift-multiply-subtract sequence to implement modulo for certain convenient bucket counts, and for that, we can look up extensive analysis. 8-) Thanks, Florian From augustnagro at gmail.com Tue Nov 19 04:43:23 2019 From: augustnagro at gmail.com (August Nagro) Date: Mon, 18 Nov 2019 22:43:23 -0600 Subject: Bounds Check Elimination with Fast-Range In-Reply-To: <877e3x0wji.fsf@oldenburg2.str.redhat.com> References: <19544187-6670-4A65-AF47-297F38E8D555@gmail.com> <87r22549fg.fsf@oldenburg2.str.redhat.com> <1AB809E9-27B3-423E-AB1A-448D6B4689F6@oracle.com> <877e3x0wji.fsf@oldenburg2.str.redhat.com> Message-ID: <2BAEB1D3-46B7-4BA9-81A3-4F5E7B47B82A@gmail.com> Apologies; there were actually a few errors in my Universal Hashing javadoc (not the code); they?ve been corrected: https://gist.github.com/AugustNagro/4f2d70d261347e515efe0f87de9e8dc2 One thing that might be relevant to the bounds check elimination is that in Java fast-range will output in the range of [-tableSize / 2, tableSize / 2 - 1]. So then we need table[fr(hash) + tableSize/2]. However, tableSize / 2 will be a constant, so that division need only be done once. Regarding Florian?s concerns: yes it is right that fast-range isn?t optimal in every case (and I never tried to claim that). If your tableSize is a power of 2, then just use xor/mask ala HashMap. But the benefit is when mapping to tables of arbitrary size, where those modulo intrinsics may not apply. 
And here's a tangent to think about: is growing HashMap's backing array by powers of 2 actually a good thing, when the HashMap gets large? What if you instead wanted to grow by powers of 1.5, or even grow probabilistically, based on the collision rate, allocation pressure, or other data? With fast-range you can do this if you want. And without the performance hit of %! > On Nov 18, 2019, at 2:17 PM, Florian Weimer wrote: > > * John Rose: > >> On Nov 18, 2019, at 5:10 AM, Florian Weimer wrote: >>> >>>> >>>> The idea is that, given (int) hash h and (int) size N, then ((long) h) >>>> * N) >>> 32 is a good mapping. >>> >>> I looked at this in the past weeks in a different context, and I don't >>> think this would work because we have: >> >> That technique appears to require either a well-conditioned hash code >> (which is not the case with Integer.hashCode) or else a value of N that >> performs extra mixing on h. (So a very *non-*power-of-two value of N >> would be better here, i.e., N with larger popcount.) >> >> A little more mixing should help the problem Florian reports with a >> badly conditioned h. Given this: >> >> int fr(int h) { return (int)(((long)h * N) >>> 32); } >> int h = x.hashCode(); >> //int bucket = fr(h); // weak if h is badly conditioned >> >> then, assuming multiplication is cheap: > > (Back-to-back multiplications probably are not.) > >> int bucket = fr(h * M); // M = 0x2357BD or something >> >> or maybe something fast and sloppy like: >> >> int bucket = fr(h + (h << 8)); >> >> or even: >> >> int bucket = fr(h) ^ (h & (N-1)); > > Does this really work? I don't think so. > > I think this kind of perturbation is quite expensive. Arm's BITR should > be helpful here. But even though this operation is commonly needed and > easily implemented in hardware, it's rarely found in CPUs. > > Any scheme with another multiplication is probably not an improvement > over the multiply-shift-multiply-subtract sequence to implement modulo > for certain convenient bucket counts, and for that, we can look up > extensive analysis. 8-) > > Thanks, > Florian From augustnagro at gmail.com Tue Nov 19 15:09:28 2019 From: augustnagro at gmail.com (August Nagro) Date: Tue, 19 Nov 2019 09:09:28 -0600 Subject: Bounds Check Elimination with Fast-Range In-Reply-To: <2BAEB1D3-46B7-4BA9-81A3-4F5E7B47B82A@gmail.com> References: <19544187-6670-4A65-AF47-297F38E8D555@gmail.com> <87r22549fg.fsf@oldenburg2.str.redhat.com> <1AB809E9-27B3-423E-AB1A-448D6B4689F6@oracle.com> <877e3x0wji.fsf@oldenburg2.str.redhat.com> <2BAEB1D3-46B7-4BA9-81A3-4F5E7B47B82A@gmail.com> Message-ID: <26D46AC7-E39C-4853-AE4A-0E78B8AA03B6@gmail.com> Never mind my comment on fast-range being in [-tableSize / 2, tableSize / 2 - 1]: this is because I had forgotten to do an unsigned cast. > On Nov 18, 2019, at 10:43 PM, August Nagro wrote: > > Apologies; there were actually a few errors in my Universal Hashing javadoc (not the code); they've been corrected: https://gist.github.com/AugustNagro/4f2d70d261347e515efe0f87de9e8dc2 > > One thing that might be relevant to the bounds check elimination is that in Java fast-range will output in the range of [-tableSize / 2, tableSize / 2 - 1]. So then we need table[fr(hash) + tableSize/2]. However, tableSize / 2 will be a constant, so that division need only be done once. > > Regarding Florian's concerns: yes it is right that fast-range isn't optimal in every case (and I never tried to claim that). If your tableSize is a power of 2, then just use xor/mask ala HashMap. But the benefit is when mapping to tables of arbitrary size, where those modulo intrinsics may not apply. > > And here's a tangent to think about: is growing HashMap's backing array by powers of 2 actually a good thing, when the HashMap gets large? > What if you instead wanted to grow by powers of 1.5, or even grow probabilistically, based on the collision rate, allocation pressure, or other data? With fast-range you can do this if you want. And without the performance hit of %! > > >> On Nov 18, 2019, at 2:17 PM, Florian Weimer wrote: >> >> * John Rose: >> >>> On Nov 18, 2019, at 5:10 AM, Florian Weimer wrote: >>>> >>>>> >>>>> The idea is that, given (int) hash h and (int) size N, then ((long) h) >>>>> * N) >>> 32 is a good mapping. >>>> >>>> I looked at this in the past weeks in a different context, and I don't >>>> think this would work because we have: >>> >>> That technique appears to require either a well-conditioned hash code >>> (which is not the case with Integer.hashCode) or else a value of N that >>> performs extra mixing on h. (So a very *non-*power-of-two value of N >>> would be better here, i.e., N with larger popcount.) >>> >>> A little more mixing should help the problem Florian reports with a >>> badly conditioned h. Given this: >>> >>> int fr(int h) { return (int)(((long)h * N) >>> 32); } >>> int h = x.hashCode(); >>> //int bucket = fr(h); // weak if h is badly conditioned >>> >>> then, assuming multiplication is cheap: >> >> (Back-to-back multiplications probably are not.) >> >>> int bucket = fr(h * M); // M = 0x2357BD or something >>> >>> or maybe something fast and sloppy like: >>> >>> int bucket = fr(h + (h << 8)); >>> >>> or even: >>> >>> int bucket = fr(h) ^ (h & (N-1)); >> >> Does this really work? I don't think so. >> >> I think this kind of perturbation is quite expensive. Arm's BITR should >> be helpful here. But even though this operation is commonly needed and >> easily implemented in hardware, it's rarely found in CPUs. >> >> Any scheme with another multiplication is probably not an improvement >> over the multiply-shift-multiply-subtract sequence to implement modulo >> for certain convenient bucket counts, and for that, we can look up >> extensive analysis. 8-) >> >> Thanks, >> Florian > From brent.christian at oracle.com Tue Nov 19 18:06:11 2019 From: brent.christian at oracle.com (Brent Christian) Date: Tue, 19 Nov 2019 10:06:11 -0800 Subject: RFR 8233272 : The Class.forName specification should be updated to match the long-standing implementation with respect to class linking In-Reply-To: References: <79528f01-3418-eb51-4dbd-897e4e97f853@oracle.com> <7ddee581-c68b-bf40-22db-871205b8ca1b@oracle.com> <3d305a6d-fd45-f6e5-93f2-c47250f31985@oracle.com> <99dded93-17d2-ea40-deef-56efd10218c7@oracle.com> <1e4cd61f-dddd-b221-f0b4-448e64a4b440@oracle.com> <38a77634-e3d7-426a-ae2d-c957804bf8f3@oracle.com> Message-ID: <6a73ae42-2083-0346-6afb-6db73a36612e@oracle.com> Thank you for the suggestions, Mandy and David. I've pushed the change. -Brent From matthias.baesken at sap.com Thu Nov 21 09:24:59 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Thu, 21 Nov 2019 09:24:59 +0000 Subject: RFR: 8234397: add OS uptime information to os::print_os_info output Message-ID: Hello, please review this small addition to os::print_os_info. Currently os::print_os_info outputs various interesting OS information. The output is platform dependent; on Linux, currently the following information is printed: distro, uname, some important libversions, some limits, load average, memory info, info about /proc/sys, container and virtualization details and steal ticks. The OS uptime would be a helpful addition.
Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8234397 http://cr.openjdk.java.net/~mbaesken/webrevs/8234397.1/ Thanks, Matthias From stefan.karlsson at oracle.com Thu Nov 21 10:33:28 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 21 Nov 2019 11:33:28 +0100 Subject: RFR: 8234562: Move OrderAccess::release_store*/load_acquire to Atomic Message-ID: <89e4bbcd-a7de-2e5b-14cf-73699d984a69@oracle.com> Hi all, I'd like to propose that we move release_store, release_store_fence, and load_acquire, from OrderAccess to Atomic. https://cr.openjdk.java.net/~stefank/8234562/webrev.01/ https://bugs.openjdk.java.net/browse/JDK-8234562 Atomic already has the relaxed store and load, among other functions for concurrent access, but release_store, release_store_fence, and load_acquire, are located in OrderAccess. After this change there's an inconsistency in the order of the parameters in the store functions in the Atomic API: void store(T store_value, volatile D* dest) void release_store(volatile D* dest, T store_value) void release_store_fence(volatile D* dest, T store_value) I'd like to address that in a separate RFE, where I propose that we move the dest parameter to the left for all the Atomic functions. See: https://bugs.openjdk.java.net/browse/JDK-8234563 I've tested this patch with tier1-7. I've also built fastdebug on the following configs: aarch64, arm32, ppc64le, s390x, shenandoah, zero, minimal, just to minimize any disruptions. Thanks, StefanK From david.holmes at oracle.com Thu Nov 21 10:46:15 2019 From: david.holmes at oracle.com (David Holmes) Date: Thu, 21 Nov 2019 20:46:15 +1000 Subject: RFR: 8234397: add OS uptime information to os::print_os_info output In-Reply-To: References: Message-ID: Hi Matthias, On 21/11/2019 7:24 pm, Baesken, Matthias wrote: > Hello, please review this small addition to os::print_os_info . 
> > Currently os::print_os_info outputs various interesting OS information, > The output is platforms dependent, on Linux currently the following information is printed : > distro, uname , some important libversions, some limits, load average, memory info, info about /proc/sys , container and virtualization details and steal ticks. > The OS uptime would be a helpful addition. I'd be interested to hear an example of this. > Bug/webrev : > https://bugs.openjdk.java.net/browse/JDK-8234397 > http://cr.openjdk.java.net/~mbaesken/webrevs/8234397.1/ Can Linux not use the POSIX version? Thanks, David > > Thanks, Matthias > From matthias.baesken at sap.com Thu Nov 21 11:22:23 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Thu, 21 Nov 2019 11:22:23 +0000 Subject: RFR: 8234397: add OS uptime information to os::print_os_info output In-Reply-To: References: Message-ID: Hi David , > > Hi Matthias, > > On 21/11/2019 7:24 pm, Baesken, Matthias wrote: > > Hello, please review this small addition to os::print_os_info . > > > > Currently os::print_os_info outputs various interesting OS information, > > The output is platforms dependent, on Linux currently the following > information is printed : > > distro, uname , some important libversions, some limits, load average, > memory info, info about /proc/sys , container and virtualization details and > steal ticks. > > The OS uptime would be a helpful addition. > > I'd be interested to hear an example of this. > One example that occurred last week - my colleague Christoph and me were browsing through an hs_err file of a crash on AIX . When looking into the hs_err we wanted to know the uptime because our latest fontconfig - patches (for getting rid of the crash) needed a reboot too to really work . Unfortunately we could not find the info , and we were disappointed ( then we noticed the crash is from OpenJDK and not our internal JVM ). 
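As a concrete illustration of the information being proposed, on Linux the uptime can be obtained from /proc/uptime, whose first field is seconds since boot. The following is a hypothetical standalone sketch of reading and formatting that value (not the code from the webrev, and the /proc read is of course Linux-only):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class Uptime {
    // Format an uptime in seconds roughly the way a report might print it.
    static String format(long seconds) {
        long d = seconds / 86400;
        long h = (seconds % 86400) / 3600;
        long m = (seconds % 3600) / 60;
        return d + "d " + h + "h " + m + "m";
    }

    public static void main(String[] args) throws Exception {
        Path p = Paths.get("/proc/uptime");
        if (Files.exists(p)) {
            // /proc/uptime looks like "123456.78 654321.00";
            // the first field is seconds since boot.
            String first = new String(Files.readAllBytes(p)).trim().split("\\s+")[0];
            System.out.println("OS uptime: " + format((long) Double.parseDouble(first)));
        } else {
            System.out.println("No /proc/uptime on this platform");
        }
    }
}
```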
> > Bug/webrev : > > https://bugs.openjdk.java.net/browse/JDK-8234397 > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234397.1/ > > Can Linux not use the POSIX version? > Unfortunately the posix code does not give the desired result on Linux (at least on my test machines). Best regards, Matthias From stefan.karlsson at oracle.com Thu Nov 21 11:26:11 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 21 Nov 2019 12:26:11 +0100 Subject: RFR: 8234563: Harmonize parameter order in Atomic Message-ID: <69edd7c6-93ed-24d8-d761-1fbe968b50d4@oracle.com> Hi all, I'd like to propose a restructuring of the parameter order in Atomic. https://bugs.openjdk.java.net/browse/JDK-8234563 These are some of the functions used to concurrently write memory from HotSpot C++ code: Atomic::store(value, destination); OrderAccess::release_store(destination, value) OrderAccess::release_store_fence(destination, value) Atomic::add(value, destination); Atomic::sub(value, destination); Atomic::xchg(exchange_value, destination); Atomic::cmpxchg(exchange_value, destination, compare_value); With the proposed JDK-8234562 change, this would look like: Atomic::store(value, destination); Atomic::release_store(destination, value) Atomic::release_store_fence(destination, value) Atomic::add(value, destination); Atomic::sub(value, destination); Atomic::xchg(exchange_value, destination); Atomic::cmpxchg(exchange_value, destination, compare_value); I'd like to propose that we move the destination parameter over to the left, and the new value to the right. 
This would look like this: Atomic::store(destination, value); Atomic::release_store(destination, value) Atomic::release_store_fence(destination, value) Atomic::add(destination, value); Atomic::sub(destination, value); Atomic::xchg(destination, exchange_value); Atomic::cmpxchg(destination, compare_value, exchange_value); This would bring the Atomic API more in-line with the order for a normal store: *destination = value; I've split this up into separate patches, each dealing with a separate operation: Atomic::store: https://cr.openjdk.java.net/~stefank/8234563/webrev.01.01.store/ Atomic::add: https://cr.openjdk.java.net/~stefank/8234563/webrev.01.02.add/ Atomic::sub: https://cr.openjdk.java.net/~stefank/8234563/webrev.01.03.sub/ Atomic::xchg: https://cr.openjdk.java.net/~stefank/8234563/webrev.01.04.xchg/ Atomic::cmpxchg: https://cr.openjdk.java.net/~stefank/8234563/webrev.01.05.cmpxchg/ All sub-patches combined: https://cr.openjdk.java.net/~stefank/8234563/webrev.01.all/ The patches applies on-top of the patch for JDK-8234562: https://cr.openjdk.java.net/~stefank/8234562/webrev.01/ I've tested this patch with tier1-7. I've also built fastdebug on the following configs: aarch64, arm32, ppc64le, s390x, shenandoah, zero, minimal, just to minimize any disruptions. However, this is a moving target and latest rebase of this has only been run on tier1. 
Thanks, StefanK From jean-philippe at bempel.fr Thu Nov 21 12:19:50 2019 From: jean-philippe at bempel.fr (Jean-Philippe BEMPEL) Date: Thu, 21 Nov 2019 13:19:50 +0100 Subject: PrintAssembly: Passing options to hsdis does not work anymore Message-ID: Hello, I have just found an issue with PrintAssemblyOptions since openjdk 13 (it still works with openjdk 12). Previously we could change the output of PrintAssembly thanks to PrintAssemblyOptions (for example, switching to Intel syntax instead of the AT&T one). So doing java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly -XX:PrintAssemblyOptions=intel -version with the correct hsdis-*.[dll|so] should output assembly with Intel syntax. I have investigated this issue and it seems related to the addition of the _optionParsed global flag: https://github.com/openjdk/jdk/blob/master/src/hotspot/share/compiler/disassembler.cpp#L417 Removing this line fixes the issue. decode_env is created each time a call to Disassembler::decode() is made, but parsing options is done only once. _option_buf is initialized each time decode_env is created, but options are not parsed for subsequent calls and _option_buf is not filled. Downside of my fix: options are reparsed for each method disassembled, but this is what was done before, from what I understood. Thanks Jean-Philippe Bempel From vladimir.x.ivanov at oracle.com Thu Nov 21 12:26:02 2019 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Thu, 21 Nov 2019 15:26:02 +0300 Subject: PrintAssembly: Passing options to hsdis does not work anymore In-Reply-To: References: Message-ID: Thanks for the report, Jean-Philippe. My reading is -XX:PrintAssemblyOptions doesn't have any effect starting 13. Is it what you observe? 
Best regards, Vladimir Ivanov On 21.11.2019 15:19, Jean-Philippe BEMPEL wrote: > Hello, > > I have just found an issue with PrintAssemblyOptions since openjdk 13 > (still works with openjdk 12) > Before we could change the output of PrintAssembly thanks to > PrintAssemblyOptions > (for example switching to intel syntax instead of AT&T one) > So doing > java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly > -XX:PrintAssemblyOptions=intel -version > with correct hsdis-*.[dll|so] should output assembly with intel syntax. > I have investigated this issue and seems related to the addition of > _optionParsed global flag: > https://github.com/openjdk/jdk/blob/master/src/hotspot/share/compiler/disassembler.cpp#L417 > > Removing this line fix the issue. > > decode_env is created each time a call to Disassembler::decode() is made > but parsing options is made only once. > _option_buf is initialized each time decode_env is created but options are > not parsed for subsequent calls and _option_bug is not filled. > > Downside of my fix: options are reparsed for each method disassembled, but > this is what was done before from what I understood. > > Thanks > Jean-Philippe Bempel > From vladimir.x.ivanov at oracle.com Thu Nov 21 12:38:35 2019 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Thu, 21 Nov 2019 15:38:35 +0300 Subject: PrintAssembly: Passing options to hsdis does not work anymore In-Reply-To: References: Message-ID: <20a156d0-4190-39ee-5a3d-58c56bc495a2@oracle.com> Anyway, filed JDK-8234583 to track it: https://bugs.openjdk.java.net/browse/JDK-8234583 Best regards, Vladimir Ivanov On 21.11.2019 15:26, Vladimir Ivanov wrote: > Thanks for the report, Jean-Philippe. > > My reading is -XX:PrintAssemblyOptions doesn't have any effect starting > 13. Is it what you observe? 
> > Best regards, > Vladimir Ivanov > > On 21.11.2019 15:19, Jean-Philippe BEMPEL wrote: >> Hello, >> >> I have just found an issue with PrintAssemblyOptions since openjdk 13 >> (still works with openjdk 12) >> Before we could change the output of PrintAssembly thanks to >> PrintAssemblyOptions >> (for example switching to intel syntax instead of AT&T one) >> So doing >> java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly >> -XX:PrintAssemblyOptions=intel -version >> with correct hsdis-*.[dll|so] should output assembly with intel syntax. >> I have investigated this issue and seems related to the addition of >> _optionParsed global flag: >> https://github.com/openjdk/jdk/blob/master/src/hotspot/share/compiler/disassembler.cpp#L417 >> >> >> Removing this line fix the issue. >> >> decode_env is created each time a call to Disassembler::decode() is made >> but parsing options is made only once. >> _option_buf is initialized each time decode_env is created but options >> are >> not parsed for subsequent calls and _option_bug is not filled. >> >> Downside of my fix: options are reparsed for each method disassembled, >> but >> this is what was done before from what I understood. >> >> Thanks >> Jean-Philippe Bempel >> From jean-philippe at bempel.fr Thu Nov 21 13:18:22 2019 From: jean-philippe at bempel.fr (Jean-Philippe BEMPEL) Date: Thu, 21 Nov 2019 14:18:22 +0100 Subject: PrintAssembly: Passing options to hsdis does not work anymore In-Reply-To: <20a156d0-4190-39ee-5a3d-58c56bc495a2@oracle.com> References: <20a156d0-4190-39ee-5a3d-58c56bc495a2@oracle.com> Message-ID: Hello Vladimir, Thanks for filing this issue. PrintAssemblyOptions works for global hotspot settings like show-bytes, show-offset, show-pc or help, but for hsdis plugin, options are not passed to the plugin because the buffer is empty. 
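The failure mode described above can be reduced to a small sketch: a global "parsed once" flag guarding state that actually lives per instance, so every instance after the first keeps an empty buffer. The class and field names below are illustrative only, not the actual decode_env members in disassembler.cpp:

```java
public class ParseOnceBug {
    static boolean optionsParsed = false;          // global flag, shared by all instances
    StringBuilder optionBuf = new StringBuilder(); // per-instance buffer, starts empty

    ParseOnceBug(String options) {
        if (!optionsParsed) {          // only the FIRST instance parses...
            optionBuf.append(options); // ...so only its buffer is ever filled
            optionsParsed = true;
        }
        // later instances skip parsing entirely and keep an empty buffer,
        // so nothing is forwarded to the plugin for them
    }

    public static void main(String[] args) {
        ParseOnceBug first = new ParseOnceBug("intel");
        ParseOnceBug second = new ParseOnceBug("intel");
        System.out.println("first:  '" + first.optionBuf + "'");
        System.out.println("second: '" + second.optionBuf + "'"); // empty: the bug
    }
}
```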
Thanks On Thu, Nov 21, 2019 at 1:38 PM Vladimir Ivanov < vladimir.x.ivanov at oracle.com> wrote: > Anyway, filed JDK-8234583 to track it: > > https://bugs.openjdk.java.net/browse/JDK-8234583 > > Best regards, > Vladimir Ivanov > > On 21.11.2019 15:26, Vladimir Ivanov wrote: > > Thanks for the report, Jean-Philippe. > > > > My reading is -XX:PrintAssemblyOptions doesn't have any effect starting > > 13. Is it what you observe? > > > > Best regards, > > Vladimir Ivanov > > > > On 21.11.2019 15:19, Jean-Philippe BEMPEL wrote: > >> Hello, > >> > >> I have just found an issue with PrintAssemblyOptions since openjdk 13 > >> (still works with openjdk 12) > >> Before we could change the output of PrintAssembly thanks to > >> PrintAssemblyOptions > >> (for example switching to intel syntax instead of AT&T one) > >> So doing > >> java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly > >> -XX:PrintAssemblyOptions=intel -version > >> with correct hsdis-*.[dll|so] should output assembly with intel syntax. > >> I have investigated this issue and seems related to the addition of > >> _optionParsed global flag: > >> > https://github.com/openjdk/jdk/blob/master/src/hotspot/share/compiler/disassembler.cpp#L417 > >> > >> > >> Removing this line fix the issue. > >> > >> decode_env is created each time a call to Disassembler::decode() is made > >> but parsing options is made only once. > >> _option_buf is initialized each time decode_env is created but options > >> are > >> not parsed for subsequent calls and _option_bug is not filled. > >> > >> Downside of my fix: options are reparsed for each method disassembled, > >> but > >> this is what was done before from what I understood. 
> >> > >> Thanks > >> Jean-Philippe Bempel > >> > From stefan.johansson at oracle.com Thu Nov 21 14:38:18 2019 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Thu, 21 Nov 2019 15:38:18 +0100 Subject: RFR (S): 8233702: Introduce helper function to clamp value to range In-Reply-To: <8bf24f6a-b4e0-f3b1-10f6-19a34b9c6fc4@oracle.com> References: <5e24eab5-7a81-3d14-31e4-7d81a10debff@oracle.com> <4F8382A7-2BAE-42D0-9F01-7E57CA588FF2@oracle.com> <8ced83cd-3374-4517-21b7-8f6401c1c81e@oracle.com> <290c83af-f554-9236-4f85-740736374e8d@oracle.com> <832C5850-7B54-43CB-A688-BEB1EAFDA437@oracle.com> <8bf24f6a-b4e0-f3b1-10f6-19a34b9c6fc4@oracle.com> Message-ID: Hi Thomas, On 2019-11-15 10:02, Thomas Schatzl wrote: > Hi Kim, > > On 14.11.19 23:52, Kim Barrett wrote: >>> On Nov 14, 2019, at 10:58 AM, Thomas Schatzl >>> wrote: >>> New webrevs: >>> >>> http://cr.openjdk.java.net/~tschatzl/8233702/webrev.1_to_2/ (diff) >>> http://cr.openjdk.java.net/~tschatzl/8233702/webrev.2/ (full) >>> >>> Passed hs-tier1-5. >>> >>> Thanks, >>> ? Thomas >> >> Looks good. Looks good, I really like this change. One nit, I would like the comment where we can't use clamp in threadLocalAllocBuffer.cpp to be a bit more specific saying that we will assert if using it. Now I could not figure out what would be different without looking at the impl of clamp. No need for an extra review though, push at will. Thanks, Stefan >> > > ? thanks for your review. > > Thomas From david.holmes at oracle.com Thu Nov 21 23:52:30 2019 From: david.holmes at oracle.com (David Holmes) Date: Fri, 22 Nov 2019 09:52:30 +1000 Subject: RFR: 8234562: Move OrderAccess::release_store*/load_acquire to Atomic In-Reply-To: <89e4bbcd-a7de-2e5b-14cf-73699d984a69@oracle.com> References: <89e4bbcd-a7de-2e5b-14cf-73699d984a69@oracle.com> Message-ID: Hi Stefan, This generally all seems fine. I have one concern about header file includes. There are not many changes here that change an include of orderAccess.hpp to atomic.hpp. 
So I think we may have missing includes, or at least have very indirect include paths. For example compiledMethod.cpp doesn't include atomic.hpp, nor does compiledMethod.inline.hpp, but the inline.hpp does include orderAccess.hpp which is no longer needed. Aside: I see some odd looking uses of load_acquire/release_store pairings :( Minor nits: src/hotspot/os_cpu/aix_ppc/atomic_aix_ppc.hpp seems to be an indentation issue: + T t = Atomic::load(p); + // Use twi-isync for load_acquire (faster than lwsync). + __asm__ __volatile__ ("twi 0,%0,0\n isync\n" : : "r" (t) : "memory"); + return t; --- src/hotspot/os_cpu/linux_ppc/atomic_linux_ppc.hpp seems to be an indentation issue: + __asm__ __volatile__ ("twi 0,%0,0\n isync\n" : : "r" (t) : "memory"); + return t; --- src/hotspot/os_cpu/windows_x86/atomic_windows_x86.hpp +// bound calls like release_store go through OrderAccess::load +// and OrderAccess::store which do volatile memory accesses. s/OrderAccess/Atomic/ I just realised this used to be technically correct given: class OrderAccess : private Atomic { but I personally never realized OrderAccess and Atomic were related this way! :) --- Thanks, David ----- On 21/11/2019 8:33 pm, Stefan Karlsson wrote: > Hi all, > > I'd like to propose that we move release_store, release_store_fence, and > load_acquire, from OrderAccess to Atomic. > > https://cr.openjdk.java.net/~stefank/8234562/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8234562 > > Atomic already has the relaxed store and load, among other functions for > concurrent access, but release_store, release_store_fence, and > load_acquire, are located in OrderAccess. 
> > After this change there's an inconsistency in the order of the > parameters in the store functions in the Atomic API: > > ?void store(T store_value, volatile D* dest) > ?void release_store(volatile D* dest, T store_value) > ?void release_store_fence(volatile D* dest, T store_value) > > I'd like to address that in a separate RFE, where I propose that we move > the dest parameter to the left for all the Atomic functions. See: > https://bugs.openjdk.java.net/browse/JDK-8234563 > > I've tested this patch with tier1-7. I've also built fastdebug on the > following configs: aarch64, arm32, ppc64le, s390x, shenandoah, zero, > minimal, just to minimize any disruptions. > > Thanks, > StefanK From david.holmes at oracle.com Fri Nov 22 05:15:27 2019 From: david.holmes at oracle.com (David Holmes) Date: Fri, 22 Nov 2019 15:15:27 +1000 Subject: RFR: 8234563: Harmonize parameter order in Atomic In-Reply-To: <69edd7c6-93ed-24d8-d761-1fbe968b50d4@oracle.com> References: <69edd7c6-93ed-24d8-d761-1fbe968b50d4@oracle.com> Message-ID: Hi Stefan, On 21/11/2019 9:26 pm, Stefan Karlsson wrote: > Hi all, > > I'd like to propose a restructuring of the parameter order in Atomic. Okay - consistency is good, especially if it then aligns with external APIs. Thanks for doing this cleanup! 
> https://bugs.openjdk.java.net/browse/JDK-8234563 > > These are some of the functions used to concurrently write memory from > HotSpot C++ code: > > ?Atomic::store(value, destination); > ?OrderAccess::release_store(destination, value) > ?OrderAccess::release_store_fence(destination, value) > ?Atomic::add(value, destination); > ?Atomic::sub(value, destination); > ?Atomic::xchg(exchange_value, destination); > ?Atomic::cmpxchg(exchange_value, destination, compare_value); > > With the proposed JDK-8234562 change, this would look like: > > ?Atomic::store(value, destination); > ?Atomic::release_store(destination, value) > ?Atomic::release_store_fence(destination, value) > ?Atomic::add(value, destination); > ?Atomic::sub(value, destination); > ?Atomic::xchg(exchange_value, destination); > ?Atomic::cmpxchg(exchange_value, destination, compare_value); > > I'd like to propose that we move the destination parameter over to the > left, and the new value to the right. This would look like this: > > ?Atomic::store(destination, value); > ?Atomic::release_store(destination, value) > ?Atomic::release_store_fence(destination, value) > ?Atomic::add(destination, value); > ?Atomic::sub(destination, value); > ?Atomic::xchg(destination, exchange_value); > ?Atomic::cmpxchg(destination, compare_value, exchange_value); I was expecting Atomic::cmpxchg(destination, exchange_value, compare_value); as that would seem more consistent so that we always have: - arg 1 => destination - arg 2 => new (or delta) value but I guess this is consistent with external cmpxchg APIs :( > This would bring the Atomic API more in-line with the order for a normal > store: > > *destination = value; > > I've split this up into separate patches, each dealing with a separate > operation: > > Atomic::store: > ?https://cr.openjdk.java.net/~stefank/8234563/webrev.01.01.store/ Ok. 
> Atomic::add: > ?https://cr.openjdk.java.net/~stefank/8234563/webrev.01.02.add/ src/hotspot/os_cpu/solaris_x86/solaris_x86_64.il You updated the comments to show the parameters have changed order but the assembly code is unchanged - won't it need modification too? Or is it that the true call site for this still uses the original order ?? -- I was a bit confused about the changes involving Atomic::add_using_helper: +template +inline D Atomic::add_using_helper(Fn fn, D volatile* dest, I add_value) { return PrimitiveConversions::cast( fn(PrimitiveConversions::cast(add_value), reinterpret_cast(dest))); because you switched the Atomic API parameter order, but the underlying function being called still has the parameters passed in the old order. I'm not sure whether this is the right level to stop the change or whether it should have been pushed all the way down to e.g. os::atomic_add_func ? Otherwise seems okay. > Atomic::sub: > ?https://cr.openjdk.java.net/~stefank/8234563/webrev.01.03.sub/ Ok. > Atomic::xchg: > ?https://cr.openjdk.java.net/~stefank/8234563/webrev.01.04.xchg/ src/hotspot/os_cpu/solaris_x86/solaris_x86_64.il same issue with the assembly as per Atomic::add. --- Same issue/query with xchg_using_helper as add_using_helper. > Atomic::cmpxchg: > ?https://cr.openjdk.java.net/~stefank/8234563/webrev.01.05.cmpxchg/ src/hotspot/os_cpu/bsd_x86/bsd_x86_32.s src/hotspot/os_cpu/linux_x86/linux_x86_32.s src/hotspot/os_cpu/solaris_x86/solaris_x86_64.il Again queries about the actual assembly code. Thanks, David ----- > All sub-patches combined: > ?https://cr.openjdk.java.net/~stefank/8234563/webrev.01.all/ > > The patches applies on-top of the patch for JDK-8234562: > ?https://cr.openjdk.java.net/~stefank/8234562/webrev.01/ > > I've tested this patch with tier1-7. I've also built fastdebug on the > following configs: aarch64, arm32, ppc64le, s390x, shenandoah, zero, > minimal, just to minimize any disruptions. 
However, this is a moving > target and latest rebase of this has only been run on tier1. > > Thanks, > StefanK From thomas.schatzl at oracle.com Fri Nov 22 09:03:06 2019 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Fri, 22 Nov 2019 10:03:06 +0100 Subject: RFR (S): 8233702: Introduce helper function to clamp value to range In-Reply-To: References: <5e24eab5-7a81-3d14-31e4-7d81a10debff@oracle.com> <4F8382A7-2BAE-42D0-9F01-7E57CA588FF2@oracle.com> <8ced83cd-3374-4517-21b7-8f6401c1c81e@oracle.com> <290c83af-f554-9236-4f85-740736374e8d@oracle.com> <832C5850-7B54-43CB-A688-BEB1EAFDA437@oracle.com> <8bf24f6a-b4e0-f3b1-10f6-19a34b9c6fc4@oracle.com> Message-ID: Hi, On 21.11.19 15:38, Stefan Johansson wrote: > Hi Thomas, > > On 2019-11-15 10:02, Thomas Schatzl wrote: >> Hi Kim, >> >> On 14.11.19 23:52, Kim Barrett wrote: >>>> On Nov 14, 2019, at 10:58 AM, Thomas Schatzl >>>> wrote: >>>> New webrevs: >>>> >>>> http://cr.openjdk.java.net/~tschatzl/8233702/webrev.1_to_2/ (diff) >>>> http://cr.openjdk.java.net/~tschatzl/8233702/webrev.2/ (full) >>>> >>>> Passed hs-tier1-5. >>>> >>>> Thanks, >>>> ? Thomas >>> >>> Looks good. > Looks good, I really like this change. > > One nit, I would like the comment where we can't use clamp in > threadLocalAllocBuffer.cpp to be a bit more specific saying that we will > assert if using it. Now I could not figure out what would be different > without looking at the impl of clamp. > > No need for an extra review though, push at will. > > Thanks, > Stefan Done. Thanks for your review. Thomas From robbin.ehn at oracle.com Fri Nov 22 09:29:23 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Fri, 22 Nov 2019 10:29:23 +0100 Subject: RFR: 8234563: Harmonize parameter order in Atomic In-Reply-To: <69edd7c6-93ed-24d8-d761-1fbe968b50d4@oracle.com> References: <69edd7c6-93ed-24d8-d761-1fbe968b50d4@oracle.com> Message-ID: Thanks again, looks good! Much appreciated! 
/Robbin On 11/21/19 12:26 PM, Stefan Karlsson wrote: > Hi all, > > I'd like to propose a restructuring of the parameter order in Atomic. > > https://bugs.openjdk.java.net/browse/JDK-8234563 > > These are some of the functions used to concurrently write memory from HotSpot > C++ code: > > ?Atomic::store(value, destination); > ?OrderAccess::release_store(destination, value) > ?OrderAccess::release_store_fence(destination, value) > ?Atomic::add(value, destination); > ?Atomic::sub(value, destination); > ?Atomic::xchg(exchange_value, destination); > ?Atomic::cmpxchg(exchange_value, destination, compare_value); > > With the proposed JDK-8234562 change, this would look like: > > ?Atomic::store(value, destination); > ?Atomic::release_store(destination, value) > ?Atomic::release_store_fence(destination, value) > ?Atomic::add(value, destination); > ?Atomic::sub(value, destination); > ?Atomic::xchg(exchange_value, destination); > ?Atomic::cmpxchg(exchange_value, destination, compare_value); > > I'd like to propose that we move the destination parameter over to the left, and > the new value to the right. 
This would look like this: > > ?Atomic::store(destination, value); > ?Atomic::release_store(destination, value) > ?Atomic::release_store_fence(destination, value) > ?Atomic::add(destination, value); > ?Atomic::sub(destination, value); > ?Atomic::xchg(destination, exchange_value); > ?Atomic::cmpxchg(destination, compare_value, exchange_value); > > This would bring the Atomic API more in-line with the order for a normal store: > > *destination = value; > > I've split this up into separate patches, each dealing with a separate operation: > > Atomic::store: > ?https://cr.openjdk.java.net/~stefank/8234563/webrev.01.01.store/ > > Atomic::add: > ?https://cr.openjdk.java.net/~stefank/8234563/webrev.01.02.add/ > > Atomic::sub: > ?https://cr.openjdk.java.net/~stefank/8234563/webrev.01.03.sub/ > > Atomic::xchg: > ?https://cr.openjdk.java.net/~stefank/8234563/webrev.01.04.xchg/ > > Atomic::cmpxchg: > ?https://cr.openjdk.java.net/~stefank/8234563/webrev.01.05.cmpxchg/ > > All sub-patches combined: > ?https://cr.openjdk.java.net/~stefank/8234563/webrev.01.all/ > > The patches applies on-top of the patch for JDK-8234562: > ?https://cr.openjdk.java.net/~stefank/8234562/webrev.01/ > > I've tested this patch with tier1-7. I've also built fastdebug on the following > configs: aarch64, arm32, ppc64le, s390x, shenandoah, zero, minimal, just to > minimize any disruptions. However, this is a moving target and latest rebase of > this has only been run on tier1. > > Thanks, > StefanK From robbin.ehn at oracle.com Fri Nov 22 09:28:18 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Fri, 22 Nov 2019 10:28:18 +0100 Subject: RFR: 8234562: Move OrderAccess::release_store*/load_acquire to Atomic In-Reply-To: <89e4bbcd-a7de-2e5b-14cf-73699d984a69@oracle.com> References: <89e4bbcd-a7de-2e5b-14cf-73699d984a69@oracle.com> Message-ID: Thanks for fixing this! On 11/21/19 11:33 AM, Stefan Karlsson wrote: > I've tested this patch with tier1-7. 
I've also built fastdebug on the following > configs: aarch64, arm32, ppc64le, s390x, shenandoah, zero, minimal, just to > minimize any disruptions. Great, looks good. Thanks, Robbin > > Thanks, > StefanK From erik.osterlund at oracle.com Fri Nov 22 10:49:11 2019 From: erik.osterlund at oracle.com (erik.osterlund at oracle.com) Date: Fri, 22 Nov 2019 11:49:11 +0100 Subject: RFR: 8234426: Sweeper should not CompiledIC::set_to_clean with ICStubs for is_unloading() nmethods Message-ID: Hi, When the sweeper processes an nmethod, it will clean inline caches if it is_alive(). Today, the cleaning will utilize transitional states (using ICStubs) if the nmethod is_alive(), which is always true for the sweeper. If it runs out of ICStubs, it might have to safepoint to refill them. When it does, the currently processed nmethod might be is_unloading(). That is not a problem for the GC per se (safepoint operation fusing with mark end), but it is a problem for heap walkers that get confused that an nmethod reachable from a thread is unloading and hence has dead oops in it. This sweeper nmethod is the *only* nmethod that violates an invariant that nmethods reachable from threads (Thread::nmethods_do) are not unloading. By simply changing the condition to not use ICStubs when the nmethod is_unloading(), we get this neat invariant, and code gets less confused about this. Bug: https://bugs.openjdk.java.net/browse/JDK-8234426 Webrev: http://cr.openjdk.java.net/~eosterlund/8234426/webrev.00/ Thanks, /Erik From stefan.karlsson at oracle.com Fri Nov 22 10:55:14 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 22 Nov 2019 11:55:14 +0100 Subject: RFR: 8234562: Move OrderAccess::release_store*/load_acquire to Atomic In-Reply-To: References: <89e4bbcd-a7de-2e5b-14cf-73699d984a69@oracle.com> Message-ID: <1183c399-a83d-1d73-947c-bfd693a07ae3@oracle.com> Hi David, On 2019-11-22 00:52, David Holmes wrote: > Hi Stefan, > > This generally all seems fine. Thanks. 
> > I have one concern about header file includes. There are not many > changes here that change an include of orderAccess.hpp to atomic.hpp. So > I think we may have missing includes, or at least have very indirect > include paths. For example compiledMethod.cpp doesn't include > atomic.hpp, nor does compiledMethod.inline.hpp, but the inline.hpp does > include orderAccess.hpp which is no longer needed. I agree. I thought about doing a full cleanup of this, but backed away, because I didn't want to redo all builds on all platforms and configs Would you be OK if I created a cleanup patch that can be pushed after these changes? Last time I did a complete include cleanup it took over a week to get that reviewed! > > Aside: I see some odd looking uses of load_acquire/release_store > pairings :( > > Minor nits: > > src/hotspot/os_cpu/aix_ppc/atomic_aix_ppc.hpp > > seems to be an indentation issue: > > +??? T t = Atomic::load(p); > +??? // Use twi-isync for load_acquire (faster than lwsync). > +??? __asm__ __volatile__ ("twi 0,%0,0\n isync\n" : : "r" (t) : "memory"); > +??? return t; > fixed > --- > > src/hotspot/os_cpu/linux_ppc/atomic_linux_ppc.hpp > > seems to be an indentation issue: > > +??? __asm__ __volatile__ ("twi 0,%0,0\n isync\n" : : "r" (t) : "memory"); > +??? return t; > fixed > --- > > src/hotspot/os_cpu/windows_x86/atomic_windows_x86.hpp > > +// bound calls like release_store go through OrderAccess::load > +// and OrderAccess::store which do volatile memory accesses. > > s/OrderAccess/Atomic/ > fixed > I just realised this used to be technically correct given: > > class OrderAccess : private Atomic { > > but I personally never realized OrderAccess and Atomic were related this > way! :) > :) Thanks for reviewing, StefanK > --- > > Thanks, > David > ----- > > On 21/11/2019 8:33 pm, Stefan Karlsson wrote: >> Hi all, >> >> I'd like to propose that we move release_store, release_store_fence, >> and load_acquire, from OrderAccess to Atomic. 
>> >> https://cr.openjdk.java.net/~stefank/8234562/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8234562 >> >> Atomic already has the relaxed store and load, among other functions >> for concurrent access, but release_store, release_store_fence, and >> load_acquire, are located in OrderAccess. >> >> After this change there's an inconsistency in the order of the >> parameters in the store functions in the Atomic API: >> >> ??void store(T store_value, volatile D* dest) >> ??void release_store(volatile D* dest, T store_value) >> ??void release_store_fence(volatile D* dest, T store_value) >> >> I'd like to address that in a separate RFE, where I propose that we >> move the dest parameter to the left for all the Atomic functions. See: >> https://bugs.openjdk.java.net/browse/JDK-8234563 >> >> I've tested this patch with tier1-7. I've also built fastdebug on the >> following configs: aarch64, arm32, ppc64le, s390x, shenandoah, zero, >> minimal, just to minimize any disruptions. >> >> Thanks, >> StefanK From stefan.karlsson at oracle.com Fri Nov 22 10:55:33 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 22 Nov 2019 11:55:33 +0100 Subject: RFR: 8234562: Move OrderAccess::release_store*/load_acquire to Atomic In-Reply-To: References: <89e4bbcd-a7de-2e5b-14cf-73699d984a69@oracle.com> Message-ID: <042911b8-672d-5e43-6c21-f90ede031acb@oracle.com> Thanks, Robbin. StefanK On 2019-11-22 10:28, Robbin Ehn wrote: > Thanks for fixing this! > > On 11/21/19 11:33 AM, Stefan Karlsson wrote: >> I've tested this patch with tier1-7. I've also built fastdebug on the >> following configs: aarch64, arm32, ppc64le, s390x, shenandoah, zero, >> minimal, just to minimize any disruptions. > > Great, looks good. 
> > Thanks, Robbin > >> >> Thanks, >> StefanK
From stefan.karlsson at oracle.com Fri Nov 22 11:14:59 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 22 Nov 2019 12:14:59 +0100 Subject: RFR: 8234563: Harmonize parameter order in Atomic In-Reply-To: References: <69edd7c6-93ed-24d8-d761-1fbe968b50d4@oracle.com> Message-ID: <913d278e-9719-4ff1-3caf-1bed1e19d510@oracle.com> On 2019-11-22 06:15, David Holmes wrote: > Hi Stefan, > > On 21/11/2019 9:26 pm, Stefan Karlsson wrote: >> Hi all, >> >> I'd like to propose a restructuring of the parameter order in Atomic. > > Okay - consistency is good, especially if it then aligns with external > APIs. Thanks for doing this cleanup! > >> https://bugs.openjdk.java.net/browse/JDK-8234563 >> >> These are some of the functions used to concurrently write memory from >> HotSpot C++ code:
>>   Atomic::store(value, destination);
>>   OrderAccess::release_store(destination, value)
>>   OrderAccess::release_store_fence(destination, value)
>>   Atomic::add(value, destination);
>>   Atomic::sub(value, destination);
>>   Atomic::xchg(exchange_value, destination);
>>   Atomic::cmpxchg(exchange_value, destination, compare_value);
>> With the proposed JDK-8234562 change, this would look like:
>>   Atomic::store(value, destination);
>>   Atomic::release_store(destination, value)
>>   Atomic::release_store_fence(destination, value)
>>   Atomic::add(value, destination);
>>   Atomic::sub(value, destination);
>>   Atomic::xchg(exchange_value, destination);
>>   Atomic::cmpxchg(exchange_value, destination, compare_value);
>> I'd like to propose that we move the destination parameter over to the >> left, and the new value to the right.
This would look like this:
>>   Atomic::store(destination, value);
>>   Atomic::release_store(destination, value)
>>   Atomic::release_store_fence(destination, value)
>>   Atomic::add(destination, value);
>>   Atomic::sub(destination, value);
>>   Atomic::xchg(destination, exchange_value);
>>   Atomic::cmpxchg(destination, compare_value, exchange_value);
> > I was expecting
> > Atomic::cmpxchg(destination, exchange_value, compare_value);
> > as that would seem more consistent so that we always have: > - arg 1 => destination > - arg 2 => new (or delta) value
> > but I guess this is consistent with external cmpxchg APIs :(
> >> This would bring the Atomic API more in-line with the order for a >> normal store:
>> >> *destination = value;
>> >> I've split this up into separate patches, each dealing with a separate >> operation:
>> Atomic::store:
>>   https://cr.openjdk.java.net/~stefank/8234563/webrev.01.01.store/
> > Ok.
> >> Atomic::add:
>>   https://cr.openjdk.java.net/~stefank/8234563/webrev.01.02.add/
> > src/hotspot/os_cpu/solaris_x86/solaris_x86_64.il
> > You updated the comments to show the parameters have changed order but > the assembly code is unchanged - won't it need modification too? Or is > it that the true call site for this still uses the original order ?? >
Right. I've changed the order for the C++ functions, but left inline assembly that assumes a given parameter order. I didn't want to change that with this patch. Here's where the arguments are swapped:
// Not using add_using_helper; see comment for cmpxchg.
template<>
template
inline D Atomic::PlatformAdd<4>::add_and_fetch(D volatile* dest, I add_value,
                                               atomic_memory_order order) const {
  STATIC_ASSERT(4 == sizeof(I));
  STATIC_ASSERT(4 == sizeof(D));
  return PrimitiveConversions::cast(
    _Atomic_add(PrimitiveConversions::cast(add_value),
                reinterpret_cast(dest)));
}
So, the comment refers to the high-level Atomic::add function, not the _Atomic_add implementation.
I agree that it's confusing, and thought about that, but I don't have a good suggestion how to change these comments. If you have a good suggestion on what to change this to, I can update all these comments.
> -- > > I was a bit confused about the changes involving Atomic::add_using_helper:
> > +template
> +inline D Atomic::add_using_helper(Fn fn, D volatile* dest, I add_value) {
>   return PrimitiveConversions::cast(
>     fn(PrimitiveConversions::cast(add_value),
>        reinterpret_cast(dest)));
> > because you switched the Atomic API parameter order, but the underlying > function being called still has the parameters passed in the old order. > I'm not sure whether this is the right level to stop the change or > whether it should have been pushed all the way down to e.g. > os::atomic_add_func ?
If I wanted to push this further down I would have to start to change assembly code that relied on a specific order. That's why I chose to stop here. See:
add_func_t* os::atomic_add_func = os::atomic_add_bootstrap;
---
int32_t os::atomic_add_bootstrap(int32_t add_value, volatile int32_t* dest) {
  // try to use the stub:
  add_func_t* func = CAST_TO_FN_PTR(add_func_t*, StubRoutines::atomic_add_entry());
  if (func != NULL) {
    os::atomic_add_func = func;
    return (*func)(add_value, dest);
  }
  assert(Threads::number_of_threads() == 0, "for bootstrap only");
  return (*dest) += add_value;
}
---
StubRoutines::_atomic_add_entry = generate_atomic_add();
---
  address generate_atomic_add() {
    StubCodeMark mark(this, "StubRoutines", "atomic_add");
    address start = __ pc();
    __ movl(rax, c_rarg0);
    __ lock();
    __ xaddl(Address(c_rarg1, 0), c_rarg0);
    __ addl(rax, c_rarg0);
    __ ret(0);
    return start;
  }
I'd prefer to leave that exercise to someone else.
> > Otherwise seems okay.
> >> Atomic::sub:
>>   https://cr.openjdk.java.net/~stefank/8234563/webrev.01.03.sub/
> > Ok.
> >> Atomic::xchg: >> ??https://cr.openjdk.java.net/~stefank/8234563/webrev.01.04.xchg/ > > src/hotspot/os_cpu/solaris_x86/solaris_x86_64.il > > same issue with the assembly as per Atomic::add. > > --- > > Same issue/query with xchg_using_helper as add_using_helper. > >> Atomic::cmpxchg: >> ??https://cr.openjdk.java.net/~stefank/8234563/webrev.01.05.cmpxchg/ > > src/hotspot/os_cpu/bsd_x86/bsd_x86_32.s > src/hotspot/os_cpu/linux_x86/linux_x86_32.s > src/hotspot/os_cpu/solaris_x86/solaris_x86_64.il > > Again queries about the actual assembly code. Thanks for reviewing this, StefanK > > Thanks, > David > ----- > >> All sub-patches combined: >> ??https://cr.openjdk.java.net/~stefank/8234563/webrev.01.all/ >> >> The patches applies on-top of the patch for JDK-8234562: >> ??https://cr.openjdk.java.net/~stefank/8234562/webrev.01/ >> >> I've tested this patch with tier1-7. I've also built fastdebug on the >> following configs: aarch64, arm32, ppc64le, s390x, shenandoah, zero, >> minimal, just to minimize any disruptions. However, this is a moving >> target and latest rebase of this has only been run on tier1. >> >> Thanks, >> StefanK From stefan.karlsson at oracle.com Fri Nov 22 11:15:11 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 22 Nov 2019 12:15:11 +0100 Subject: RFR: 8234563: Harmonize parameter order in Atomic In-Reply-To: References: <69edd7c6-93ed-24d8-d761-1fbe968b50d4@oracle.com> Message-ID: <7e214db1-5642-9009-3450-9d428da190d1@oracle.com> Thanks, Robbin. StefanK On 2019-11-22 10:29, Robbin Ehn wrote: > Thanks again, looks good! > > Much appreciated! > > /Robbin > > On 11/21/19 12:26 PM, Stefan Karlsson wrote: >> Hi all, >> >> I'd like to propose a restructuring of the parameter order in Atomic. 
>> >> https://bugs.openjdk.java.net/browse/JDK-8234563 >> >> These are some of the functions used to concurrently write memory from >> HotSpot C++ code: >> >> ??Atomic::store(value, destination); >> ??OrderAccess::release_store(destination, value) >> ??OrderAccess::release_store_fence(destination, value) >> ??Atomic::add(value, destination); >> ??Atomic::sub(value, destination); >> ??Atomic::xchg(exchange_value, destination); >> ??Atomic::cmpxchg(exchange_value, destination, compare_value); >> >> With the proposed JDK-8234562 change, this would look like: >> >> ??Atomic::store(value, destination); >> ??Atomic::release_store(destination, value) >> ??Atomic::release_store_fence(destination, value) >> ??Atomic::add(value, destination); >> ??Atomic::sub(value, destination); >> ??Atomic::xchg(exchange_value, destination); >> ??Atomic::cmpxchg(exchange_value, destination, compare_value); >> >> I'd like to propose that we move the destination parameter over to the >> left, and the new value to the right. 
This would look like this: >> >> ??Atomic::store(destination, value); >> ??Atomic::release_store(destination, value) >> ??Atomic::release_store_fence(destination, value) >> ??Atomic::add(destination, value); >> ??Atomic::sub(destination, value); >> ??Atomic::xchg(destination, exchange_value); >> ??Atomic::cmpxchg(destination, compare_value, exchange_value); >> >> This would bring the Atomic API more in-line with the order for a >> normal store: >> >> *destination = value; >> >> I've split this up into separate patches, each dealing with a separate >> operation: >> >> Atomic::store: >> ??https://cr.openjdk.java.net/~stefank/8234563/webrev.01.01.store/ >> >> Atomic::add: >> ??https://cr.openjdk.java.net/~stefank/8234563/webrev.01.02.add/ >> >> Atomic::sub: >> ??https://cr.openjdk.java.net/~stefank/8234563/webrev.01.03.sub/ >> >> Atomic::xchg: >> ??https://cr.openjdk.java.net/~stefank/8234563/webrev.01.04.xchg/ >> >> Atomic::cmpxchg: >> ??https://cr.openjdk.java.net/~stefank/8234563/webrev.01.05.cmpxchg/ >> >> All sub-patches combined: >> ??https://cr.openjdk.java.net/~stefank/8234563/webrev.01.all/ >> >> The patches applies on-top of the patch for JDK-8234562: >> ??https://cr.openjdk.java.net/~stefank/8234562/webrev.01/ >> >> I've tested this patch with tier1-7. I've also built fastdebug on the >> following configs: aarch64, arm32, ppc64le, s390x, shenandoah, zero, >> minimal, just to minimize any disruptions. However, this is a moving >> target and latest rebase of this has only been run on tier1. >> >> Thanks, >> StefanK From rkennke at redhat.com Fri Nov 22 12:03:01 2019 From: rkennke at redhat.com (Roman Kennke) Date: Fri, 22 Nov 2019 13:03:01 +0100 Subject: RFR: 8234426: Sweeper should not CompiledIC::set_to_clean with ICStubs for is_unloading() nmethods In-Reply-To: References: Message-ID: <7415d366-f412-d449-b65b-a22c62645c53@redhat.com> We also see a problem that might be related to this with Shenandoah. 
I'll test your change and see if it resolves that problem. Thanks, Roman > Hi, > > When the sweeper processes an nmethod, it will clean inline caches if it > is_alive(). > Today, the cleaning will utilize transitional states (using ICStubs) if > the nmethod is_alive(), > which is always true for the sweeper. If it runs out of ICStubs, it > might have to safepoint > to refill them. When it does, the currently processed nmethod might be > is_unloading(). > That is not a problem for the GC per se (safepoint operation fusing with > mark end), but it > is a problem for heap walkers that get confused that an nmethod > reachable from a thread is unloading > and hence has dead oops in it. This sweeper nmethod is the *only* > nmethod that violates an > invariant that nmethods reachable from threads (Thread::nmethods_do) are > not unloading. > > By simply changing the condition to not use ICStubs when the nmethod > is_unloading(), we > get this neat invariant, and code gets less confused about this. 
> > Bug: > https://bugs.openjdk.java.net/browse/JDK-8234426 > > Webrev: > http://cr.openjdk.java.net/~eosterlund/8234426/webrev.00/ > > Thanks, > /Erik >
From erik.osterlund at oracle.com Fri Nov 22 15:03:52 2019 From: erik.osterlund at oracle.com (erik.osterlund at oracle.com) Date: Fri, 22 Nov 2019 16:03:52 +0100 Subject: RFR: 8234662: Sweeper should keep current nmethod alive before yielding for ICStub refills Message-ID:
Hi,
Today, when GCs scan the stack, all nmethod roots found from threads are nmethods that have gone through an nmethod entry barrier (in the case of concurrent class unloading), except the super special sweeper nmethod that is currently being processed when the sweeper runs out of ICStubs and needs to safepoint to refill them. With the fantastic safepoint coalescing optimization that we all know and love, that safepoint can get fused into a GC safepoint, which will then find the sweeper nmethod and be confused about why it has not gone through an entry barrier before yielding to the safepoint like everyone else.
This causes some headache, and I would like to harmonize this better by simply calling the nmethod entry barrier on that nmethod in the sweeper, before yielding to the safepoint. Then there is no need to treat it as a special case.
Bug: https://bugs.openjdk.java.net/browse/JDK-8234662
Webrev: http://cr.openjdk.java.net/~eosterlund/8234662/webrev.00/
Thanks. /Erik
From volker.simonis at gmail.com Fri Nov 22 17:49:46 2019 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 22 Nov 2019 18:49:46 +0100 Subject: Question on "JEP: JVMCI based JIT Compiler pre-compiled as shared library" Message-ID:
Hi,
I have a question related to the JEP draft "JVMCI based JIT Compiler pre-compiled as shared library". Back in April/May, "JDK-8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library", which was a pretty huge change, was pushed to jdk13.
I wonder how much of the envisioned JEP functionality has already been delivered by change "JDK-8220623" and what is still missing in order to fully support pre-compiled JIT compilers as a shared library. Has most of the work already been done by "JDK-8220623"? I don't see any more sub-tasks on JEP JDK-8223220 and neither do I see any progress on that JEP. From what I saw, it even hasn't been proposed and/or discussed on an OpenJDK mailing list yet. Thank you and best regards, Volker From vladimir.kozlov at oracle.com Fri Nov 22 18:23:59 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 22 Nov 2019 10:23:59 -0800 Subject: Question on "JEP: JVMCI based JIT Compiler pre-compiled as shared library" In-Reply-To: References: Message-ID: <4c28f33e-a72e-b3cd-87de-f6266d9caae2@oracle.com> Hi Volker Most of JVMCI changes for libgraal are done and it is in JDK 13/14. I don't expect any more major changes in JVMCI, only bugs fixes. About plans. As you remember during this JVMLS we talked about our plan to transition to Graal from C2 in a future. And using AOT'ed (SVM'ed) Graal (libgraal) is important part of this transition, 8220623 is part of that. Most work is done by GraalVM group in Oracle. They just released 19.3 version of GraalVM that is based on JDK 11 and using libgraal by default which use JVMCI changes from 8220623. On OpenJDK side we plan to release libgraal EA based on Metropolis repository, as I said during JVMLS. This is tracked by RFE [1]. RFE's description is outdated since Metropolis repo is based on JDK 14 now and I need to update it. There is also Graal's PR, Bob V. is working on, to adjust upstream Graal/SVM code to API changes done in JDK 14. I am working on RFE and I will be pushing some small changes into Metropolis repo for that. It is coming slower than we wish but it is progressing. 
Regards, Vladimir [1] https://bugs.openjdk.java.net/browse/JDK-8230341 On 11/22/19 9:49 AM, Volker Simonis wrote: > Hi, > > I have a question related to the JEP draft "JVMCI based JIT Compiler > pre-compiled as shared library". Back in April/May "JDK-8220623: > [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into > shared library", which was a pretty huge change, has been pushed to > jdk13. > > I wonder how much of the envisioned JEP functionality has already been > delivered by change "JDK-8220623" and what is still missing in order > to fully support pre-compiled JIT compilers as a shared library. Has > most of the work already been done by "JDK-8220623"? I don't see any > more sub-tasks on JEP JDK-8223220 and neither do I see any progress on > that JEP. From what I saw, it even hasn't been proposed and/or > discussed on an OpenJDK mailing list yet. > > Thank you and best regards, > Volker > From david.holmes at oracle.com Mon Nov 25 00:12:09 2019 From: david.holmes at oracle.com (David Holmes) Date: Mon, 25 Nov 2019 10:12:09 +1000 Subject: RFR: 8234562: Move OrderAccess::release_store*/load_acquire to Atomic In-Reply-To: <1183c399-a83d-1d73-947c-bfd693a07ae3@oracle.com> References: <89e4bbcd-a7de-2e5b-14cf-73699d984a69@oracle.com> <1183c399-a83d-1d73-947c-bfd693a07ae3@oracle.com> Message-ID: On 22/11/2019 8:55 pm, Stefan Karlsson wrote: > Hi David, > > On 2019-11-22 00:52, David Holmes wrote: >> Hi Stefan, >> >> This generally all seems fine. > > Thanks. > >> >> I have one concern about header file includes. There are not many >> changes here that change an include of orderAccess.hpp to atomic.hpp. >> So I think we may have missing includes, or at least have very >> indirect include paths. For example compiledMethod.cpp doesn't include >> atomic.hpp, nor does compiledMethod.inline.hpp, but the inline.hpp >> does include orderAccess.hpp which is no longer needed. > > I agree. 
I thought about doing a full cleanup of this, but backed away, > because I didn't want to redo all builds on all platforms and configs > > Would you be OK if I created a cleanup patch that can be pushed after > these changes? Last time I did a complete include cleanup it took over a > week to get that reviewed! As long as it builds on all platforms with and without PCH. I don't want this to trigger build failures for anyone. Thanks, David ----- >> >> Aside: I see some odd looking uses of load_acquire/release_store >> pairings :( >> >> Minor nits: >> >> src/hotspot/os_cpu/aix_ppc/atomic_aix_ppc.hpp >> >> seems to be an indentation issue: >> >> +??? T t = Atomic::load(p); >> +??? // Use twi-isync for load_acquire (faster than lwsync). >> +??? __asm__ __volatile__ ("twi 0,%0,0\n isync\n" : : "r" (t) : >> "memory"); >> +??? return t; >> > > fixed > >> --- >> >> src/hotspot/os_cpu/linux_ppc/atomic_linux_ppc.hpp >> >> seems to be an indentation issue: >> >> +??? __asm__ __volatile__ ("twi 0,%0,0\n isync\n" : : "r" (t) : >> "memory"); >> +??? return t; >> > > fixed > >> --- >> >> src/hotspot/os_cpu/windows_x86/atomic_windows_x86.hpp >> >> +// bound calls like release_store go through OrderAccess::load >> +// and OrderAccess::store which do volatile memory accesses. >> >> s/OrderAccess/Atomic/ >> > > fixed > >> I just realised this used to be technically correct given: >> >> class OrderAccess : private Atomic { >> >> but I personally never realized OrderAccess and Atomic were related >> this way! :) >> > > :) > > Thanks for reviewing, > StefanK > >> --- >> >> Thanks, >> David >> ----- >> >> On 21/11/2019 8:33 pm, Stefan Karlsson wrote: >>> Hi all, >>> >>> I'd like to propose that we move release_store, release_store_fence, >>> and load_acquire, from OrderAccess to Atomic. 
>>> >>> https://cr.openjdk.java.net/~stefank/8234562/webrev.01/ >>> https://bugs.openjdk.java.net/browse/JDK-8234562 >>> >>> Atomic already has the relaxed store and load, among other functions >>> for concurrent access, but release_store, release_store_fence, and >>> load_acquire, are located in OrderAccess. >>> >>> After this change there's an inconsistency in the order of the >>> parameters in the store functions in the Atomic API: >>> >>> ??void store(T store_value, volatile D* dest) >>> ??void release_store(volatile D* dest, T store_value) >>> ??void release_store_fence(volatile D* dest, T store_value) >>> >>> I'd like to address that in a separate RFE, where I propose that we >>> move the dest parameter to the left for all the Atomic functions. See: >>> https://bugs.openjdk.java.net/browse/JDK-8234563 >>> >>> I've tested this patch with tier1-7. I've also built fastdebug on the >>> following configs: aarch64, arm32, ppc64le, s390x, shenandoah, zero, >>> minimal, just to minimize any disruptions. >>> >>> Thanks, >>> StefanK From david.holmes at oracle.com Mon Nov 25 04:29:27 2019 From: david.holmes at oracle.com (David Holmes) Date: Mon, 25 Nov 2019 14:29:27 +1000 Subject: RFR: 8234563: Harmonize parameter order in Atomic In-Reply-To: <913d278e-9719-4ff1-3caf-1bed1e19d510@oracle.com> References: <69edd7c6-93ed-24d8-d761-1fbe968b50d4@oracle.com> <913d278e-9719-4ff1-3caf-1bed1e19d510@oracle.com> Message-ID: <8f00e619-da63-df08-fefa-ff199179cea4@oracle.com> Hi Stefan, On 22/11/2019 9:14 pm, Stefan Karlsson wrote: > On 2019-11-22 06:15, David Holmes wrote: >> Hi Stefan, >> >> On 21/11/2019 9:26 pm, Stefan Karlsson wrote: >>> Hi all, >>> >>> I'd like to propose a restructuring of the parameter order in Atomic. >> >> Okay - consistency is good, especially if it then aligns with external >> APIs. Thanks for doing this cleanup! 
>> >>> https://bugs.openjdk.java.net/browse/JDK-8234563 >>> >>> These are some of the functions used to concurrently write memory >>> from HotSpot C++ code: >>> >>> ??Atomic::store(value, destination); >>> ??OrderAccess::release_store(destination, value) >>> ??OrderAccess::release_store_fence(destination, value) >>> ??Atomic::add(value, destination); >>> ??Atomic::sub(value, destination); >>> ??Atomic::xchg(exchange_value, destination); >>> ??Atomic::cmpxchg(exchange_value, destination, compare_value); >>> >>> With the proposed JDK-8234562 change, this would look like: >>> >>> ??Atomic::store(value, destination); >>> ??Atomic::release_store(destination, value) >>> ??Atomic::release_store_fence(destination, value) >>> ??Atomic::add(value, destination); >>> ??Atomic::sub(value, destination); >>> ??Atomic::xchg(exchange_value, destination); >>> ??Atomic::cmpxchg(exchange_value, destination, compare_value); >>> >>> I'd like to propose that we move the destination parameter over to >>> the left, and the new value to the right. 
This would look like this:
>>>   Atomic::store(destination, value);
>>>   Atomic::release_store(destination, value)
>>>   Atomic::release_store_fence(destination, value)
>>>   Atomic::add(destination, value);
>>>   Atomic::sub(destination, value);
>>>   Atomic::xchg(destination, exchange_value);
>>>   Atomic::cmpxchg(destination, compare_value, exchange_value);
>> >> I was expecting
>> >> Atomic::cmpxchg(destination, exchange_value, compare_value);
>> >> as that would seem more consistent so that we always have: >> - arg 1 => destination >> - arg 2 => new (or delta) value
>> >> but I guess this is consistent with external cmpxchg APIs :(
>> >>> This would bring the Atomic API more in-line with the order for a >>> normal store:
>>> >>> *destination = value;
>>> >>> I've split this up into separate patches, each dealing with a >>> separate operation:
>>> Atomic::store:
>>>   https://cr.openjdk.java.net/~stefank/8234563/webrev.01.01.store/
>> >> Ok.
>> >>> Atomic::add:
>>>   https://cr.openjdk.java.net/~stefank/8234563/webrev.01.02.add/
>> >> src/hotspot/os_cpu/solaris_x86/solaris_x86_64.il
>> >> You updated the comments to show the parameters have changed order but >> the assembly code is unchanged - won't it need modification too? Or is >> it that the true call site for this still uses the original order ?? >>
> > Right. I've changed the order for the C++ functions, but left inline > assembly that assumes a given parameter order. I didn't want to change > that with this patch. Here's where the arguments are swapped:
> > // Not using add_using_helper; see comment for cmpxchg.
> template<>
> template
> inline D Atomic::PlatformAdd<4>::add_and_fetch(D volatile* dest, I add_value,
>                                                atomic_memory_order order) const {
>   STATIC_ASSERT(4 == sizeof(I));
>   STATIC_ASSERT(4 == sizeof(D));
>   return PrimitiveConversions::cast(
>     _Atomic_add(PrimitiveConversions::cast(add_value),
>                 reinterpret_cast(dest)));
> }
Thanks for pointing that out.
> So, the comment refers to the high-level Atomic::add function, not the > _Atomic_add implementation. I agree that it's confusing, and thought > about that, but I don't have a good suggestion how to change these > comments. If you have a good suggestion on what to change this to, I > can update all these comments.
How about:
// Support for jint Atomic::add(volatile jint* dest, jint add_value)
// called via _Atomic_add(jint add_value, volatile jint* dest)
or perhaps
// Implementation of jint _Atomic_add(jint add_value, volatile jint* dest)
// used by Atomic::add(volatile jint* dest, jint add_value)
>> -- >> >> I was a bit confused about the changes involving >> Atomic::add_using_helper:
>> >> +template
>> +inline D Atomic::add_using_helper(Fn fn, D volatile* dest, I add_value) {
>>   return PrimitiveConversions::cast(
>>     fn(PrimitiveConversions::cast(add_value),
>>        reinterpret_cast(dest)));
>> >> because you switched the Atomic API parameter order, but the >> underlying function being called still has the parameters passed in >> the old order. I'm not sure whether this is the right level to stop >> the change or whether it should have been pushed all the way down to >> e.g. os::atomic_add_func ?
> > If I wanted to push this further down I would have to start to change > assembly code that relied on a specific order. That's why I chose to > stop here.
Understood.
> See:
> add_func_t* os::atomic_add_func = os::atomic_add_bootstrap;
> > ---
> > int32_t os::atomic_add_bootstrap(int32_t add_value, volatile int32_t* dest) {
>   // try to use the stub:
>   add_func_t* func = CAST_TO_FN_PTR(add_func_t*, StubRoutines::atomic_add_entry());
>   if (func != NULL) {
>     os::atomic_add_func = func;
>     return (*func)(add_value, dest);
>   }
>   assert(Threads::number_of_threads() == 0, "for bootstrap only");
>   return (*dest) += add_value;
> }
> > ---
> > StubRoutines::_atomic_add_entry = generate_atomic_add();
> > ---
>   address generate_atomic_add() {
>     StubCodeMark mark(this, "StubRoutines", "atomic_add");
>     address start = __ pc();
>     __ movl(rax, c_rarg0);
>     __ lock();
>     __ xaddl(Address(c_rarg1, 0), c_rarg0);
>     __ addl(rax, c_rarg0);
>     __ ret(0);
>     return start;
>   }
> > I'd prefer to leave that exercise to someone else.
Fair enough :)
Thanks, David -----
> >> >> Otherwise seems okay.
>> >>> Atomic::sub:
>>>   https://cr.openjdk.java.net/~stefank/8234563/webrev.01.03.sub/
>> >> Ok.
>> >>> Atomic::xchg:
>>>   https://cr.openjdk.java.net/~stefank/8234563/webrev.01.04.xchg/
>> >> src/hotspot/os_cpu/solaris_x86/solaris_x86_64.il
>> >> same issue with the assembly as per Atomic::add.
>> >> ---
>> >> Same issue/query with xchg_using_helper as add_using_helper.
>> >>> Atomic::cmpxchg:
>>>   https://cr.openjdk.java.net/~stefank/8234563/webrev.01.05.cmpxchg/
>> >> src/hotspot/os_cpu/bsd_x86/bsd_x86_32.s >> src/hotspot/os_cpu/linux_x86/linux_x86_32.s >> src/hotspot/os_cpu/solaris_x86/solaris_x86_64.il
>> >> Again queries about the actual assembly code.
> > Thanks, > David > -----
> >> All sub-patches combined:
>>   https://cr.openjdk.java.net/~stefank/8234563/webrev.01.all/
>> >> The patches apply on top of the patch for JDK-8234562:
>>   https://cr.openjdk.java.net/~stefank/8234562/webrev.01/
>> >> I've tested this patch with tier1-7. I've also built fastdebug on the >> following configs: aarch64, arm32, ppc64le, s390x, shenandoah, zero, >> minimal, just to minimize any disruptions. However, this is a moving >> target and latest rebase of this has only been run on tier1.
>>> >>> Thanks, >>> StefanK From david.holmes at oracle.com Mon Nov 25 07:03:06 2019 From: david.holmes at oracle.com (David Holmes) Date: Mon, 25 Nov 2019 17:03:06 +1000 Subject: RFR: 8234397: add OS uptime information to os::print_os_info output In-Reply-To: References: Message-ID: <035e1ba6-b7bf-4681-953c-fa8fd37667ae@oracle.com> Hi Matthias, On 21/11/2019 9:22 pm, Baesken, Matthias wrote: > Hi David , > >> >> Hi Matthias, >> >> On 21/11/2019 7:24 pm, Baesken, Matthias wrote: >>> Hello, please review this small addition to os::print_os_info . >>> >>> Currently os::print_os_info outputs various interesting OS information, >>> The output is platforms dependent, on Linux currently the following >> information is printed : >>> distro, uname , some important libversions, some limits, load average, >> memory info, info about /proc/sys , container and virtualization details and >> steal ticks. >>> The OS uptime would be a helpful addition. >> >> I'd be interested to hear an example of this. >> > > One example that occurred last week - my colleague Christoph and me were browsing through an hs_err file of a crash on AIX . > When looking into the hs_err we wanted to know the uptime because our latest fontconfig - patches (for getting rid of the crash) needed a reboot too to really work . > Unfortunately we could not find the info , and we were disappointed ( then we noticed the crash is from OpenJDK and not our internal JVM ). > > >>> Bug/webrev : >>> https://bugs.openjdk.java.net/browse/JDK-8234397 >>> http://cr.openjdk.java.net/~mbaesken/webrevs/8234397.1/ >> >> Can Linux not use the POSIX version? >> > > Unfortunately the posix code does not give the desired result on Linux (at least on my test machines). The comment in the posix code mentions that it doesn't work on macOS but doesn't say anything about Linux. Has it been tested on Solaris? I'm really unsure about this code and am hoping someone more knowledgeable in this areas can chime in. 
I'd be less concerned if there was a single POSIX implementation that worked everywhere. :( Though I have my general concern about adding yet another potential point of failure in the error reporting logic. Thanks, David > Best regards, Matthias > From matthias.baesken at sap.com Mon Nov 25 08:06:42 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Mon, 25 Nov 2019 08:06:42 +0000 Subject: RFR: 8234397: add OS uptime information to os::print_os_info output In-Reply-To: <035e1ba6-b7bf-4681-953c-fa8fd37667ae@oracle.com> References: <035e1ba6-b7bf-4681-953c-fa8fd37667ae@oracle.com> Message-ID: > > The comment in the posix code mentions that it doesn't work on macOS but > doesn't say anything about Linux. Has it been tested on Solaris? > Hi David, it works on Solaris . I think I should adjust the comment (saying macOS AND Linux) . Best regards, Matthias > > > > One example that occurred last week - my colleague Christoph and me > were browsing through an hs_err file of a crash on AIX . > > When looking into the hs_err we wanted to know the uptime because > our latest fontconfig - patches (for getting rid of the crash) needed a > reboot too to really work . > > Unfortunately we could not find the info , and we were disappointed ( > then we noticed the crash is from OpenJDK and not our internal JVM ). > > > > > >>> Bug/webrev : > >>> https://bugs.openjdk.java.net/browse/JDK-8234397 > >>> http://cr.openjdk.java.net/~mbaesken/webrevs/8234397.1/ > >> > >> Can Linux not use the POSIX version? > >> > > > > Unfortunately the posix code does not give the desired result on Linux (at > least on my test machines). > > The comment in the posix code mentions that it doesn't work on macOS but > doesn't say anything about Linux. Has it been tested on Solaris? > > I'm really unsure about this code and am hoping someone more > knowledgeable in this areas can chime in. I'd be less concerned if there > was a single POSIX implementation that worked everywhere. 
:( Though I > have my general concern about adding yet another potential point of > failure in the error reporting logic. > > Thanks, > David
> > Best regards, Matthias >
From matthias.baesken at sap.com Mon Nov 25 09:58:24 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Mon, 25 Nov 2019 09:58:24 +0000 Subject: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 Message-ID:
Hello, the test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15. Exception:
java.lang.Error: cores is not a directory or does not have write permissions
at ClhsdbCDSCore.main(ClhsdbCDSCore.java:115)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127)
at java.base/java.lang.Thread.run(Thread.java:833)
Looks like the test checks that directory /cores is writable:
File coresDir = new File("/cores");
if (!coresDir.isDirectory() || !coresDir.canWrite()) { ... // fail
However, on macOS 10.15, /cores is not writable any more (at least for most users, including our test user). So the test fails.
My change adjusts the test, so that it gives a clearer error message, and returns gracefully in case we are running on macOS 10.15 and notice a non-writable /cores directory.
Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8234625 http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.0/ Best regards, Matthias From matthias.baesken at sap.com Mon Nov 25 13:44:39 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Mon, 25 Nov 2019 13:44:39 +0000 Subject: RFR: 8234741: enhance os::get_core_path on macOS Message-ID: Hello, Currently the macOS implementation of os::get_core_path just displays the default core file location. However it does not handle/show other locations set by the sysctl parameter "kern.corefile" . This is enhanced by this change . I also take care of handling %P , which is commonly used as the pid placeholder in the "kern.corefile" parameter on macOS . ( additionally the change contains a one-liner adjustment in src/java.desktop/macosx/native/libawt_lwawt/awt/AWTView.h to be able to compile again on older macOS versions) Bug / webrev : https://bugs.openjdk.java.net/browse/JDK-8234741 http://cr.openjdk.java.net/~mbaesken/webrevs/8234741.0/ Thanks, Matthias From robbin.ehn at oracle.com Mon Nov 25 16:33:03 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Mon, 25 Nov 2019 17:33:03 +0100 Subject: RFR: 8234742: Improve handshake logging Message-ID: Hi all, please review. There is little useful information in the handshake logs. This changes the handshake logs to be similar to the safepoint logs, so that the basic need, which handshake operation ran and how long it took, can easily be tracked. Also the per-thread log is a bit enhanced. The refactoring using HandshakeOperation instead of a ThreadClosure is not merely for this change. Other changes in the pipeline also require a more complex HandshakeOperation. Issue: https://bugs.openjdk.java.net/browse/JDK-8234742 Changeset: http://cr.openjdk.java.net/~rehn/8234742/full/webrev/ Passes t1-3. 
Thanks, Robbin Examples: -Xlog:handshake,safepoint [7.148s][info][safepoint] Safepoint "ZMarkStart", Time since last: 381873579 ns, Reaching safepoint: 451132 ns, At safepoint: 491202 ns, Total: 942334 ns [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted threads: 25, Executed by targeted threads: 8, Total completion time: 46884 ns [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted threads: 25, Executed by targeted threads: 10, Total completion time: 94547 ns [7.152s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted threads: 25, Executed by targeted threads: 10, Total completion time: 33545 ns [7.154s][info][safepoint] Safepoint "ZMarkEnd", Time since last: 4697901 ns, Reaching safepoint: 218800 ns, At safepoint: 1462059 ns, Total: 1680859 ns [7.156s][info][handshake] Handshake "ZRendezvous", Targeted threads: 25, Executed by targeted threads: 10, Total completion time: 37291 ns [7.157s][info][safepoint] Safepoint "ZVerify", Time since last: 2201206 ns, Reaching safepoint: 295463 ns, At safepoint: 928077 ns, Total: 1223540 ns [7.161s][info][safepoint] Safepoint "ZRelocateStart", Time since last: 3161645 ns, Reaching safepoint: 206278 ns, At safepoint: 357284 ns, Total: 563562 ns [8.162s][info][safepoint] Safepoint "Cleanup", Time since last: 1000123769 ns, Reaching safepoint: 526489 ns, At safepoint: 23345 ns, Total: 549834 ns [8.182s][info][handshake] Handshake "RevokeOneBias", Targeted threads: 1, Executed by targeted threads: 0, Total completion time: 41322 ns -Xlog:handshake*=trace [1.259s][trace][handshake ] Threads signaled, begin processing blocked threads by VMThtread [1.259s][trace][handshake ] Processing handshake by VMThtread [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread 0x00007f2594022800, is_vm_thread: true, completed in 487 ns [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread 0x00007f259459e000, is_vm_thread: false, completed in 1233 ns 
[1.259s][trace][handshake ] Processing handshake by VMThtread [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread 0x00007f25945a0000, is_vm_thread: false, completed in 669 ns [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread 0x00007f259428a800, is_vm_thread: true, completed in 462 ns [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread 0x00007f25945b3800, is_vm_thread: false, completed in 574 ns ... [1.259s][trace][handshake ] Processing handshake by VMThtread [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread 0x00007f25945b6000, is_vm_thread: true, completed in 100 ns [1.259s][trace][handshake ] Processing handshake by VMThtread [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread 0x00007f25945b7800, is_vm_thread: true, completed in 103 ns [1.260s][info ][handshake ] Handshake "ZRendezvous", Targeted threads: 28, Executed by targeted threads: 4, Total completion time: 629534 ns [1.260s][debug][handshake,task ] Operation: ZRendezvous for thread 0x00007f25945a3800, is_vm_thread: false, completed in 608 ns From ioi.lam at oracle.com Mon Nov 25 18:44:02 2019 From: ioi.lam at oracle.com (Ioi Lam) Date: Mon, 25 Nov 2019 10:44:02 -0800 Subject: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 In-Reply-To: References: Message-ID: <89be4bb8-8545-e1fa-121a-474d23a295cd@oracle.com> Hi Matthias, Similar code also exists in test/hotspot/jtreg/compiler/ciReplay/CiReplayBase.java and test/hotspot/jtreg/serviceability/sa/TestJmapCore.java. Maybe it's a good time to pull out getCoreFileLocation into the test library and also move the if (coreFileLocation == null) { if (Platform.isOSX()) { ... } else if (Platform.isLinux()) { ... } code into getCoreFileLocation as well? 
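[Editor's note: the consolidation suggested above could be sketched roughly like this. The class name, signature, and defaults are hypothetical illustrations, not the actual test-library API that eventually landed.]

```java
public class CoreUtils {
    // Hypothetical shared helper: resolve where a core file is expected,
    // folding in the per-platform defaults that CiReplayBase, TestJmapCore
    // and ClhsdbCDSCore currently each duplicate.
    static String getCoreFileLocation(String configuredLocation, String osName) {
        if (configuredLocation != null) {
            // e.g. parsed from kern.corefile (macOS) or core_pattern (Linux)
            return configuredLocation;
        }
        if (osName.contains("Mac")) {
            return "/cores";   // macOS default; may be unwritable on 10.15+
        } else if (osName.contains("Linux")) {
            return "core";     // default core_pattern: "core" in the cwd
        }
        return null;           // unknown platform: caller decides to skip
    }

    public static void main(String[] args) {
        System.out.println(getCoreFileLocation(null, System.getProperty("os.name")));
    }
}
```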
See https://bugs.openjdk.java.net/browse/JDK-8233533 Thanks - Ioi On 11/25/19 1:58 AM, Baesken, Matthias wrote: > Hello, the test > serviceability/sa/ClhsdbCDSCore.java > fails on macOS 10.15 . > exception : > java.lang.Error: cores is not a directory or does not have write permissions > at ClhsdbCDSCore.main(ClhsdbCDSCore.java:115) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:564) > at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127) > at java.base/java.lang.Thread.run(Thread.java:833) > > Looks like the test checks that directory /cores is writable : > File coresDir = new File("/cores"); > if (!coresDir.isDirectory() || !coresDir.canWrite()) { ... // fail > However on macOS 10.15 /cores is not writable any more (at least for most users, including our test user). > So the test fails. > > My change adjusts the test, so that it gives a clearer error message, and returns gracefully in case we are running on macOS 10.15 and notice a non-writeable /cores directory . > > > > Bug/webrev : > > https://bugs.openjdk.java.net/browse/JDK-8234625 > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.0/ > > > Best regards, Matthias From igor.ignatyev at oracle.com Mon Nov 25 20:09:30 2019 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Mon, 25 Nov 2019 12:09:30 -0800 Subject: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 In-Reply-To: References: Message-ID: Hi Matthias, your solution will hide the fact that the coverage from this test will be missed on macOS 10.15+. 
I'd recommend you to use jtreg.SkippedException to signal that the test can't be run, or to introduce a new @requires property, say `env.core.accessible`, which is true iff core dumping is enabled and dumped cores can be accessed from the test code. -- Igor > On Nov 25, 2019, at 1:58 AM, Baesken, Matthias wrote: > > Hello, the test > serviceability/sa/ClhsdbCDSCore.java > fails on macOS 10.15 . > exception : > java.lang.Error: cores is not a directory or does not have write permissions > at ClhsdbCDSCore.main(ClhsdbCDSCore.java:115) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:564) > at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127) > at java.base/java.lang.Thread.run(Thread.java:833) > > Looks like the test checks that directory /cores is writable : > File coresDir = new File("/cores"); > if (!coresDir.isDirectory() || !coresDir.canWrite()) { ... // fail > However on macOS 10.15 /cores is not writable any more (at least for most users, including our test user). > So the test fails. > > My change adjusts the test, so that it gives a clearer error message, and returns gracefully in case we are running on macOS 10.15 and notice a non-writeable /cores directory . 
> > > > Bug/webrev : > > https://bugs.openjdk.java.net/browse/JDK-8234625 > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.0/ > > > Best regards, Matthias From david.holmes at oracle.com Mon Nov 25 22:33:58 2019 From: david.holmes at oracle.com (David Holmes) Date: Tue, 26 Nov 2019 08:33:58 +1000 Subject: RFR: 8234741: enhance os::get_core_path on macOS In-Reply-To: References: Message-ID: On 25/11/2019 11:44 pm, Baesken, Matthias wrote: > Hello, > > Currently the macOS implementation of os::get_core_path just displays the default core file location. > However it does not handle/show other locations set by the sysctl parameter "kern.corefile" . This is enhanced by this change . > I also take care of handling %P which is used a lot for the pid-placeholder on macOS in the "kern.corefile" parameter . > > > ( additionally the change contains a one-liner adjustment in src/java.desktop/macosx/native/libawt_lwawt/awt/AWTView.h to be able to compile again on older macOs versions) That issue is being discussed elsewhere by the client folk and should not be part of this change. David > > Bug / webrev : > > https://bugs.openjdk.java.net/browse/JDK-8234741 > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234741.0/ > > Thanks, Matthias > From david.holmes at oracle.com Tue Nov 26 00:38:49 2019 From: david.holmes at oracle.com (David Holmes) Date: Tue, 26 Nov 2019 10:38:49 +1000 Subject: RFR: 8234742: Improve handshake logging In-Reply-To: References: Message-ID: <64ecd635-0217-61bd-6da2-f1c90d0084f9@oracle.com> Hi Robbin, On 26/11/2019 2:33 am, Robbin Ehn wrote: > Hi all, please review. > > There is little useful information in the handshaking logs. > This changes the handshakes logs similar to safepoint logs, so the basic > need of > what handshake operation and how long it took easily can be tracked. > Also the per thread log is a bit enhanced. > > The refactoring using HandshakeOperation instead of a ThreadClosure is not > merely for this change. 
Other changes in the pipeline also require a more > complex HandshakeOperation. This change seems to be predominantly about the refactoring and only touches the logging in a few places. I think this either needs to be split into two issues or the focus of the issue changed so that it is predominantly about the refactor and only incidentally improves the logging. Thanks, David ----- > Issue: > https://bugs.openjdk.java.net/browse/JDK-8234742 > Changeset: > http://cr.openjdk.java.net/~rehn/8234742/full/webrev/ > > Passes t1-3. > > Thanks, Robbin > > Examples: > -Xlog:handshake,safepoint > > [7.148s][info][safepoint] Safepoint "ZMarkStart", Time since last: > 381873579 ns, Reaching safepoint: 451132 ns, At safepoint: 491202 ns, > Total: 942334 ns > [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted > threads: 25, Executed by targeted threads: 8, Total completion time: > 46884 ns > [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted > threads: 25, Executed by targeted threads: 10, Total completion time: > 94547 ns > [7.152s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted > threads: 25, Executed by targeted threads: 10, Total completion time: > 33545 ns > [7.154s][info][safepoint] Safepoint "ZMarkEnd", Time since last: 4697901 > ns, Reaching safepoint: 218800 ns, At safepoint: 1462059 ns, Total: > 1680859 ns > [7.156s][info][handshake] Handshake "ZRendezvous", Targeted threads: 25, > Executed by targeted threads: 10, Total completion time: 37291 ns > [7.157s][info][safepoint] Safepoint "ZVerify", Time since last: 2201206 > ns, Reaching safepoint: 295463 ns, At safepoint: 928077 ns, Total: > 1223540 ns > [7.161s][info][safepoint] Safepoint "ZRelocateStart", Time since last: > 3161645 ns, Reaching safepoint: 206278 ns, At safepoint: 357284 ns, > Total: 563562 ns > [8.162s][info][safepoint] Safepoint "Cleanup", Time since last: > 1000123769 ns, Reaching safepoint: 526489 ns, At safepoint: 23345 ns, > Total: 549834 ns 
> [8.182s][info][handshake] Handshake "RevokeOneBias", Targeted threads: > 1, Executed by targeted threads: 0, Total completion time: 41322 ns > > -Xlog:handshake*=trace > > [1.259s][trace][handshake ] Threads signaled, begin processing blocked > threads by VMThtread > [1.259s][trace][handshake ] Processing handshake by VMThtread > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f2594022800, is_vm_thread: true, completed in 487 ns > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f259459e000, is_vm_thread: false, completed in 1233 ns > [1.259s][trace][handshake ] Processing handshake by VMThtread > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f25945a0000, is_vm_thread: false, completed in 669 ns > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f259428a800, is_vm_thread: true, completed in 462 ns > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f25945b3800, is_vm_thread: false, completed in 574 ns > ... 
> [1.259s][trace][handshake ] Processing handshake by VMThtread > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f25945b6000, is_vm_thread: true, completed in 100 ns > [1.259s][trace][handshake ] Processing handshake by VMThtread > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f25945b7800, is_vm_thread: true, completed in 103 ns > [1.260s][info ][handshake ] Handshake "ZRendezvous", Targeted threads: > 28, Executed by targeted threads: 4, Total completion time: 629534 ns > [1.260s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f25945a3800, is_vm_thread: false, completed in 608 ns From matthias.baesken at sap.com Tue Nov 26 08:36:51 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Tue, 26 Nov 2019 08:36:51 +0000 Subject: RFR: 8234741: enhance os::get_core_path on macOS In-Reply-To: References: Message-ID: OK I'll do a separate change . Are you fine with the rest of the patch ? Best regards, Matthias > > On 25/11/2019 11:44 pm, Baesken, Matthias wrote: > > Hello, > > > > Currently the macOS implementation of os::get_core_path just displays > the default core file location. > > However it does not handle/show other locations set by the sysctl > parameter "kern.corefile" . This is enhanced by this change . > > I also take care of handling %P which is used a lot for the pid-placeholder > on macOS in the "kern.corefile" parameter . > > > > > > ( additionally the change contains a one-liner adjustment in > src/java.desktop/macosx/native/libawt_lwawt/awt/AWTView.h to be able > to compile again on older macOs versions) > > That issue is being discussed elsewhere by the client folk and should > not be part of this change. 
> > David > > > > > Bug / webrev : > > > > https://bugs.openjdk.java.net/browse/JDK-8234741 > > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234741.0/ > > > > Thanks, Matthias > > From robbin.ehn at oracle.com Tue Nov 26 09:29:29 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Tue, 26 Nov 2019 10:29:29 +0100 Subject: RFR: 8234742: Improve handshake logging In-Reply-To: <64ecd635-0217-61bd-6da2-f1c90d0084f9@oracle.com> References: <64ecd635-0217-61bd-6da2-f1c90d0084f9@oracle.com> Message-ID: <0d4c8a49-3023-29dd-36b4-f05911067f69@oracle.com> Hi David, On 11/26/19 1:38 AM, David Holmes wrote: > Hi Robbin, > > On 26/11/2019 2:33 am, Robbin Ehn wrote: >> Hi all, please review. >> >> There is little useful information in the handshaking logs. >> This changes the handshakes logs similar to safepoint logs, so the basic need of >> what handshake operation and how long it took easily can be tracked. >> Also the per thread log is a bit enhanced. >> >> The refactoring using HandshakeOperation instead of a ThreadClosure is not >> merely for this change. Other changes in the pipeline also require a more >> complex HandshakeOperation. > > This change seems to be predominantly about the refactoring and only touches the > logging in a few places. I think this either needs to be split into two issues > or the focus of the issue changed so that it is predominantly about the refactor > and only incidentally improves the logging. > Sure Thanks, Robbin > Thanks, > David > ----- > >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8234742 >> Changeset: >> http://cr.openjdk.java.net/~rehn/8234742/full/webrev/ >> >> Passes t1-3. 
>> >> Thanks, Robbin >> >> Examples: >> -Xlog:handshake,safepoint >> >> [7.148s][info][safepoint] Safepoint "ZMarkStart", Time since last: 381873579 >> ns, Reaching safepoint: 451132 ns, At safepoint: 491202 ns, Total: 942334 ns >> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted >> threads: 25, Executed by targeted threads: 8, Total completion time: 46884 ns >> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted >> threads: 25, Executed by targeted threads: 10, Total completion time: 94547 ns >> [7.152s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted >> threads: 25, Executed by targeted threads: 10, Total completion time: 33545 ns >> [7.154s][info][safepoint] Safepoint "ZMarkEnd", Time since last: 4697901 ns, >> Reaching safepoint: 218800 ns, At safepoint: 1462059 ns, Total: 1680859 ns >> [7.156s][info][handshake] Handshake "ZRendezvous", Targeted threads: 25, >> Executed by targeted threads: 10, Total completion time: 37291 ns >> [7.157s][info][safepoint] Safepoint "ZVerify", Time since last: 2201206 ns, >> Reaching safepoint: 295463 ns, At safepoint: 928077 ns, Total: 1223540 ns >> [7.161s][info][safepoint] Safepoint "ZRelocateStart", Time since last: 3161645 >> ns, Reaching safepoint: 206278 ns, At safepoint: 357284 ns, Total: 563562 ns >> [8.162s][info][safepoint] Safepoint "Cleanup", Time since last: 1000123769 ns, >> Reaching safepoint: 526489 ns, At safepoint: 23345 ns, Total: 549834 ns >> [8.182s][info][handshake] Handshake "RevokeOneBias", Targeted threads: 1, >> Executed by targeted threads: 0, Total completion time: 41322 ns >> >> -Xlog:handshake*=trace >> >> [1.259s][trace][handshake ] Threads signaled, begin processing blocked threads >> by VMThtread >> [1.259s][trace][handshake ] Processing handshake by VMThtread >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f2594022800, is_vm_thread: true, completed in 487 ns >> [1.259s][debug][handshake,task ] Operation: 
ZRendezvous for thread >> 0x00007f259459e000, is_vm_thread: false, completed in 1233 ns >> [1.259s][trace][handshake ] Processing handshake by VMThtread >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945a0000, is_vm_thread: false, completed in 669 ns >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f259428a800, is_vm_thread: true, completed in 462 ns >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945b3800, is_vm_thread: false, completed in 574 ns >> ... >> [1.259s][trace][handshake ] Processing handshake by VMThtread >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945b6000, is_vm_thread: true, completed in 100 ns >> [1.259s][trace][handshake ] Processing handshake by VMThtread >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945b7800, is_vm_thread: true, completed in 103 ns >> [1.260s][info ][handshake ] Handshake "ZRendezvous", Targeted threads: 28, >> Executed by targeted threads: 4, Total completion time: 629534 ns >> [1.260s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945a3800, is_vm_thread: false, completed in 608 ns From david.holmes at oracle.com Tue Nov 26 09:32:51 2019 From: david.holmes at oracle.com (David Holmes) Date: Tue, 26 Nov 2019 19:32:51 +1000 Subject: RFR: 8234741: enhance os::get_core_path on macOS In-Reply-To: References: Message-ID: On 26/11/2019 6:36 pm, Baesken, Matthias wrote: > OK I'll do a separate change . > > Are you fine with the rest of the patch ? Sorry not an area I'm familiar with. Hoping one of our mac fans can chime in on this one. I agree in principle but don't know the details. Probably going to be a slow week for reviews due to US Thanksgiving holidays. 
Cheers, David > Best regards, Matthias > > > >> >> On 25/11/2019 11:44 pm, Baesken, Matthias wrote: >>> Hello, >>> >>> Currently the macOS implementation of os::get_core_path just displays >> the default core file location. >>> However it does not handle/show other locations set by the sysctl >> parameter "kern.corefile" . This is enhanced by this change . >>> I also take care of handling %P which is used a lot for the pid-placeholder >> on macOS in the "kern.corefile" parameter . >>> >>> >>> ( additionally the change contains a one-liner adjustment in >> src/java.desktop/macosx/native/libawt_lwawt/awt/AWTView.h to be able >> to compile again on older macOS versions) >> >> That issue is being discussed elsewhere by the client folk and should >> not be part of this change. >> >> David >> >>> >>> Bug / webrev : >>> >>> https://bugs.openjdk.java.net/browse/JDK-8234741 >>> >>> http://cr.openjdk.java.net/~mbaesken/webrevs/8234741.0/ >>> >>> Thanks, Matthias >>> From matthias.baesken at sap.com Tue Nov 26 10:07:47 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Tue, 26 Nov 2019 10:07:47 +0000 Subject: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 In-Reply-To: References: Message-ID: * or to introduce new @requires property, say `env.core.accessible`, which is true iff core dumping is enabled and dumped cores can be accessed from the test code. Hello Igor, that sounds interesting. Who would set the, say, `env.core.accessible` property accordingly (so that in our environment on 10.15 it would be false) ? Best regards, Matthias From: Igor Ignatyev Sent: Montag, 25. November 2019 21:10 To: Baesken, Matthias Cc: hotspot-dev at openjdk.java.net Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 Hi Matthias, your solution will hide the fact that the coverage from this test will be missed on macOS 10.15+. 
I'd recommend you to use jtreg.SkippedException to signal that the test can't be run, or to introduce a new @requires property, say `env.core.accessible`, which is true iff core dumping is enabled and dumped cores can be accessed from the test code. -- Igor On Nov 25, 2019, at 1:58 AM, Baesken, Matthias > wrote: Hello, the test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 . exception : java.lang.Error: cores is not a directory or does not have write permissions at ClhsdbCDSCore.main(ClhsdbCDSCore.java:115) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127) at java.base/java.lang.Thread.run(Thread.java:833) Looks like the test checks that directory /cores is writable : File coresDir = new File("/cores"); if (!coresDir.isDirectory() || !coresDir.canWrite()) { ... // fail However on macOS 10.15 /cores is not writable any more (at least for most users, including our test user). So the test fails. My change adjusts the test, so that it gives a clearer error message, and returns gracefully in case we are running on macOS 10.15 and notice a non-writeable /cores directory . Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8234625 http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.0/ Best regards, Matthias From stefan.karlsson at oracle.com Tue Nov 26 11:11:44 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 26 Nov 2019 12:11:44 +0100 Subject: RFR: 8234748: Clean up atomic and orderAccess includes Message-ID: Hi all, Please review this trivial, but large, patch to clean up the includes of atomic.hpp and orderAccess.hpp. 
https://cr.openjdk.java.net/~stefank/8234748/webrev.01/ https://bugs.openjdk.java.net/browse/JDK-8234748 Thanks, StefanK From robbin.ehn at oracle.com Tue Nov 26 13:06:02 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Tue, 26 Nov 2019 14:06:02 +0100 Subject: RFR: 8234742: Improve handshake logging In-Reply-To: References: Message-ID: <31e700c7-d2fe-2977-e1e2-13e3cc10927d@oracle.com> Hi, Here is the logging part separately: http://cr.openjdk.java.net/~rehn/8234742/v2/full/webrev/index.html It contains one additional change from the first version: if (number_of_threads_issued < 1) { - log_debug(handshake)("No threads to handshake."); + log_handshake_info(start_time_ns, _op->name(), 0, 0, " (no threads)"); return; } Passes t1-3. So this goes on top of 8234796: http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ Thanks, Robbin On 11/25/19 5:33 PM, Robbin Ehn wrote: > Hi all, please review. > > There is little useful information in the handshaking logs. > This changes the handshakes logs similar to safepoint logs, so the basic need of > what handshake operation and how long it took easily can be tracked. > Also the per thread log is a bit enhanced. > > The refactoring using HandshakeOperation instead of a ThreadClosure is not > merely for this change. Other changes in the pipeline also require a more > complex HandshakeOperation. > > Issue: > https://bugs.openjdk.java.net/browse/JDK-8234742 > Changeset: > http://cr.openjdk.java.net/~rehn/8234742/full/webrev/ > > Passes t1-3. 
> > Thanks, Robbin > > Examples: > -Xlog:handshake,safepoint > > [7.148s][info][safepoint] Safepoint "ZMarkStart", Time since last: 381873579 ns, > Reaching safepoint: 451132 ns, At safepoint: 491202 ns, Total: 942334 ns > [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted threads: > 25, Executed by targeted threads: 8, Total completion time: 46884 ns > [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted threads: > 25, Executed by targeted threads: 10, Total completion time: 94547 ns > [7.152s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted threads: > 25, Executed by targeted threads: 10, Total completion time: 33545 ns > [7.154s][info][safepoint] Safepoint "ZMarkEnd", Time since last: 4697901 ns, > Reaching safepoint: 218800 ns, At safepoint: 1462059 ns, Total: 1680859 ns > [7.156s][info][handshake] Handshake "ZRendezvous", Targeted threads: 25, > Executed by targeted threads: 10, Total completion time: 37291 ns > [7.157s][info][safepoint] Safepoint "ZVerify", Time since last: 2201206 ns, > Reaching safepoint: 295463 ns, At safepoint: 928077 ns, Total: 1223540 ns > [7.161s][info][safepoint] Safepoint "ZRelocateStart", Time since last: 3161645 > ns, Reaching safepoint: 206278 ns, At safepoint: 357284 ns, Total: 563562 ns > [8.162s][info][safepoint] Safepoint "Cleanup", Time since last: 1000123769 ns, > Reaching safepoint: 526489 ns, At safepoint: 23345 ns, Total: 549834 ns > [8.182s][info][handshake] Handshake "RevokeOneBias", Targeted threads: 1, > Executed by targeted threads: 0, Total completion time: 41322 ns > > -Xlog:handshake*=trace > > [1.259s][trace][handshake ] Threads signaled, begin processing blocked threads > by VMThtread > [1.259s][trace][handshake ] Processing handshake by VMThtread > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f2594022800, is_vm_thread: true, completed in 487 ns > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 
0x00007f259459e000, is_vm_thread: false, completed in 1233 ns > [1.259s][trace][handshake ] Processing handshake by VMThtread > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f25945a0000, is_vm_thread: false, completed in 669 ns > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f259428a800, is_vm_thread: true, completed in 462 ns > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f25945b3800, is_vm_thread: false, completed in 574 ns > ... > [1.259s][trace][handshake ] Processing handshake by VMThtread > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f25945b6000, is_vm_thread: true, completed in 100 ns > [1.259s][trace][handshake ] Processing handshake by VMThtread > [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f25945b7800, is_vm_thread: true, completed in 103 ns > [1.260s][info ][handshake ] Handshake "ZRendezvous", Targeted threads: > 28, Executed by targeted threads: 4, Total completion time: 629534 ns > [1.260s][debug][handshake,task ] Operation: ZRendezvous for thread > 0x00007f25945a3800, is_vm_thread: false, completed in 608 ns From robbin.ehn at oracle.com Tue Nov 26 13:07:41 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Tue, 26 Nov 2019 14:07:41 +0100 Subject: RFR: 8234796: Refactor Handshake::execute to take a HandshakeOperation Message-ID: Hi all, please review. Issue: https://bugs.openjdk.java.net/browse/JDK-8234796 Code: http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ The handshake code needs more information about the handshake operation. We change the type from ThreadClosure to HandshakeOperation in Handshake::execute. This enables us to add more details to the HandshakeOperation as needed going forward. Tested t1 and t1-3 together with the logging improvements in 8234742. It was requested that "HandshakeOperation()" would take the name instead of having "virtual const char* name();". 
Which is in this patch. Thanks, Robbin From per.liden at oracle.com Tue Nov 26 14:20:23 2019 From: per.liden at oracle.com (Per Liden) Date: Tue, 26 Nov 2019 15:20:23 +0100 Subject: RFR: 8234796: Refactor Handshake::execute to take a HandshakeOperation In-Reply-To: References: Message-ID: <4afc7480-bb71-0dec-1218-a2c7e74930f1@oracle.com> Hi Robbin, On 11/26/19 2:07 PM, Robbin Ehn wrote: > Hi all, please review. > > Issue: > https://bugs.openjdk.java.net/browse/JDK-8234796 > Code: > http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ > > The handshake code needs more information about the handshake operation. > We change type from ThreadClosure to HandshakeOperation in > Handshake::execute. > This enables us to add more details to the HandshakeOperation as needed > going forward. I kind of think HandshakeOperation is exposing too wide an API here. I'm thinking the following things don't quite belong in there, but are more Handshake internal stuff:

void do_handshake(JavaThread* thread);
bool thread_has_completed() { return _done.trywait(); }
bool executed() const { return _executed; }
#ifdef ASSERT
void check_state() { assert(!_done.trywait(), "Must be zero"); }
#endif

How about you just expose a closure that inherits from ThreadClosure, but also takes a name? Like this:

class HandshakeClosure : public ThreadClosure {
 private:
  const char* const _name;
 public:
  HandshakeClosure(const char* name) : _name(name) {}
  const char* name() const { return _name; }
  virtual void do_thread(Thread* thread) = 0;
};

That way we expose a narrower API, and it also helps avoid the need for HandshakeOperation -> ThreadClosure wrappers. The other stuff can stay internal in a wrapping HandshakeThreadOperation like it did before. What do you think? /Per > > Tested t1 and t1-3 together with the logging improvements in 8234742. > > It was requested that "HandshakeOperation()" would take the name instead > of having "virtual const char* name();". Which is in this patch. 
> > Thanks, Robbin From per.liden at oracle.com Tue Nov 26 14:24:02 2019 From: per.liden at oracle.com (Per Liden) Date: Tue, 26 Nov 2019 15:24:02 +0100 Subject: RFR: 8234742: Improve handshake logging In-Reply-To: <31e700c7-d2fe-2977-e1e2-13e3cc10927d@oracle.com> References: <31e700c7-d2fe-2977-e1e2-13e3cc10927d@oracle.com> Message-ID: <8e85888a-8009-addd-32ca-1dfc1c456c1c@oracle.com> Hi, I just realized that my comment on "8234796: Refactor Handshake::execute to take a HandshakeOperation" is probably more relevant for this change, so cut-n-pasting it here: I kind of think HandshakeOperation is exposing a too wide API here. I'm thinking the following things doesn't quite belong in there, but is more Handshake internal stuff. void do_handshake(JavaThread* thread); bool thread_has_completed() { return _done.trywait(); } bool executed() const { return _executed; } #ifdef ASSERT void check_state() { assert(!_done.trywait(), "Must be zero"); } #endif How about you just expose a closure that inherits from ThreadClosure, but also takes a name? Like this: class HandshakeClosure : public ThreadClosure { private: const char* const _name; public: HandshakeClosure(const char* name) : _name(name) {} const char* name() const { return _name; } virtual void do_thread(Thread* thread) = 0; }; That way we expose a narrower API, and it also helps avoid the need for HandshakeOperation -> ThreadClosure wrappers. The other stuff can stay internal in a wrapping HandshakeThreadOperation like it did before. What do you think? /Per On 11/26/19 2:06 PM, Robbin Ehn wrote: > Hi, > > Here is the logging part separately: > http://cr.openjdk.java.net/~rehn/8234742/v2/full/webrev/index.html > > It contains one additional change from the first version: > ???? if (number_of_threads_issued < 1) { > -????? log_debug(handshake)("No threads to handshake."); > +????? log_handshake_info(start_time_ns, _op->name(), 0, 0, " (no > threads)"); > ?????? return; > ???? } > > Passes t1-3. 
> > So this goes on top of 8234796: > http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ > > Thanks, Robbin > > On 11/25/19 5:33 PM, Robbin Ehn wrote: >> Hi all, please review. >> >> There is little useful information in the handshaking logs. >> This changes the handshakes logs similar to safepoint logs, so the >> basic need of >> what handshake operation and how long it took easily can be tracked. >> Also the per thread log is a bit enhanced. >> >> The refactoring using HandshakeOperation instead of a ThreadClosure is >> not >> merely for this change. Other changes in the pipeline also require a more >> complex HandshakeOperation. >> >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8234742 >> Changeset: >> http://cr.openjdk.java.net/~rehn/8234742/full/webrev/ >> >> Passes t1-3. >> >> Thanks, Robbin >> >> Examples: >> -Xlog:handshake,safepoint >> >> [7.148s][info][safepoint] Safepoint "ZMarkStart", Time since last: >> 381873579 ns, Reaching safepoint: 451132 ns, At safepoint: 491202 ns, >> Total: 942334 ns >> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", >> Targeted threads: 25, Executed by targeted threads: 8, Total >> completion time: 46884 ns >> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", >> Targeted threads: 25, Executed by targeted threads: 10, Total >> completion time: 94547 ns >> [7.152s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", >> Targeted threads: 25, Executed by targeted threads: 10, Total >> completion time: 33545 ns >> [7.154s][info][safepoint] Safepoint "ZMarkEnd", Time since last: >> 4697901 ns, Reaching safepoint: 218800 ns, At safepoint: 1462059 ns, >> Total: 1680859 ns >> [7.156s][info][handshake] Handshake "ZRendezvous", Targeted threads: >> 25, Executed by targeted threads: 10, Total completion time: 37291 ns >> [7.157s][info][safepoint] Safepoint "ZVerify", Time since last: >> 2201206 ns, Reaching safepoint: 295463 ns, At safepoint: 928077 ns, >> Total: 1223540 ns >> 
[7.161s][info][safepoint] Safepoint "ZRelocateStart", Time since last: >> 3161645 ns, Reaching safepoint: 206278 ns, At safepoint: 357284 ns, >> Total: 563562 ns >> [8.162s][info][safepoint] Safepoint "Cleanup", Time since last: >> 1000123769 ns, Reaching safepoint: 526489 ns, At safepoint: 23345 ns, >> Total: 549834 ns >> [8.182s][info][handshake] Handshake "RevokeOneBias", Targeted threads: >> 1, Executed by targeted threads: 0, Total completion time: 41322 ns >> >> -Xlog:handshake*=trace >> >> [1.259s][trace][handshake ] Threads signaled, begin processing blocked >> threads by VMThtread >> [1.259s][trace][handshake ] Processing handshake by VMThtread >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f2594022800, is_vm_thread: true, completed in 487 ns >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f259459e000, is_vm_thread: false, completed in 1233 ns >> [1.259s][trace][handshake ] Processing handshake by VMThtread >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945a0000, is_vm_thread: false, completed in 669 ns >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f259428a800, is_vm_thread: true, completed in 462 ns >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945b3800, is_vm_thread: false, completed in 574 ns >> ... 
>> [1.259s][trace][handshake ] Processing handshake by VMThtread >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945b6000, is_vm_thread: true, completed in 100 ns >> [1.259s][trace][handshake ] Processing handshake by VMThtread >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945b7800, is_vm_thread: true, completed in 103 ns >> [1.260s][info ][handshake ] Handshake "ZRendezvous", Targeted threads: >> 28, Executed by targeted threads: 4, Total completion time: 629534 ns >> [1.260s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945a3800, is_vm_thread: false, completed in 608 ns From gerard.ziemski at oracle.com Tue Nov 26 16:17:34 2019 From: gerard.ziemski at oracle.com (gerard ziemski) Date: Tue, 26 Nov 2019 10:17:34 -0600 Subject: RFR: 8234741: enhance os::get_core_path on macOS In-Reply-To: References: Message-ID: <495770f4-122c-436c-6606-9f9dd79d1e90@oracle.com> hi Matthias, I had to look up "kern.corefile" option, of which I was not previously aware - it looks like a nice enhancement. I'd like to suggest a few small cleanups though: #1 add "os::" prefix to "current_process_id()" call #2 restrict the scope of "os::current_process_id()" to only the branch of "if" that needs it #3 expand on the "tail" comment a bit to explain why it might be needed Perhaps something like this: char coreinfo[MAX_PATH]; size_t sz = sizeof(coreinfo); int ret = sysctlbyname("kern.corefile", coreinfo, &sz, NULL, 0); if (ret == 0) { char *pid_pos = strstr(coreinfo, "%P"); const char* tail = (pid_pos != NULL) ? (pid_pos + 2) : ""; // skip over the "%P" to preserve any optional custom user pattern (i.e. %N, %U) if (pid_pos != NULL) { *pid_pos = '\0'; n = jio_snprintf(buffer, bufferSize, "%s%d%s", coreinfo, os::current_process_id(), tail); } else { n = jio_snprintf(buffer, bufferSize, "%s", coreinfo); } } BTW. 
I'm glad you agree to remove the unrelated AWT change from this fix and let the client team handle it. cheers On 11/26/19 2:36 AM, Baesken, Matthias wrote: > OK I'll do a separate change . > > Are you fine with the rest of the patch ? > > Best regards, Matthias > > > >> On 25/11/2019 11:44 pm, Baesken, Matthias wrote: >>> Hello, >>> >>> Currently the macOS implementation of os::get_core_path just displays >> the default core file location. >>> However it does not handle/show other locations set by the sysctl >> parameter "kern.corefile" . This is enhanced by this change . >>> I also take care of handling %P which is used a lot for the pid-placeholder >> on macOS in the "kern.corefile" parameter . >>> >>> ( additionally the change contains a one-liner adjustment in >> src/java.desktop/macosx/native/libawt_lwawt/awt/AWTView.h to be able >> to compile again on older macOs versions) >> >> That issue is being discussed elsewhere by the client folk and should >> not be part of this change. >> >> David >> >>> Bug / webrev : >>> >>> https://bugs.openjdk.java.net/browse/JDK-8234741 >>> >>> http://cr.openjdk.java.net/~mbaesken/webrevs/8234741.0/ >>> >>> Thanks, Matthias >>> From robbin.ehn at oracle.com Tue Nov 26 16:55:23 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Tue, 26 Nov 2019 17:55:23 +0100 Subject: RFR: 8234796: Refactor Handshake::execute to take a HandshakeOperation In-Reply-To: <4afc7480-bb71-0dec-1218-a2c7e74930f1@oracle.com> References: <4afc7480-bb71-0dec-1218-a2c7e74930f1@oracle.com> Message-ID: <6cd499a9-bb5c-253d-ecff-9b8aeeb86082@oracle.com> Hi Per, thanks for having a look. To support multiple handshakes in-flight and some features, e.g. suspend/resume, more things need to be customized by the handshake operation. And to let the execution code stay agnostic about how to execute a specific handshake, the execution model should be supplied in the handshake operation. 
This leads to HandshakeThreadOperation needs to be customized, since this is the installed operation. Also we now need to figure out the memory life-cycle of HandshakeThreadOperation which must be the same as HandshakeClosure, if we would add asynch handshakes. Secondly I hate this method: virtual void do_thread(Thread* thread) = 0; I never ever want Thread*, I want JavaThread* :) On 2019-11-26 15:20, Per Liden wrote: > What do you think? With that said, I'm totally fine doing your suggestion instead! But I must rename the issue also :) I think this was the correct issue to respond to, update coming. Thanks, Robbin > > /Per > >> >> Tested t1 and t1-3 together with the logging improvements in 8234742. >> >> It was requested that "HandshakeOperation()" would take the name instead >> having "virtual const char* name();". Which is in this patch. >> >> Thanks, Robbin From robbin.ehn at oracle.com Tue Nov 26 16:56:51 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Tue, 26 Nov 2019 17:56:51 +0100 Subject: RFR: 8234742: Improve handshake logging In-Reply-To: <8e85888a-8009-addd-32ca-1dfc1c456c1c@oracle.com> References: <31e700c7-d2fe-2977-e1e2-13e3cc10927d@oracle.com> <8e85888a-8009-addd-32ca-1dfc1c456c1c@oracle.com> Message-ID: <1eb51c8a-57d1-9db0-bf78-965eec8efb1b@oracle.com> Hi Per, On 2019-11-26 15:24, Per Liden wrote: > Hi, > > I just realized that my comment on "8234796: Refactor Handshake::execute to take > a HandshakeOperation" is probably more relevant for this change, so > cut-n-pasting it here: I think the other issue was correct. Thanks, Robbin > > > I kind of think HandshakeOperation is exposing a too wide API here. I'm thinking > the following things doesn't quite belong in there, but is more Handshake > internal stuff. > > ? void do_handshake(JavaThread* thread); > ? bool thread_has_completed() { return _done.trywait(); } > ? bool executed() const { return _executed; } > > #ifdef ASSERT > ? void check_state() { > ??? 
assert(!_done.trywait(), "Must be zero"); > ? } > #endif > > How about you just expose a closure that inherits from ThreadClosure, but also > takes a name? Like this: > > class HandshakeClosure : public ThreadClosure { > private: > ? const char* const _name; > > public: > ? HandshakeClosure(const char* name) : > ????? _name(name) {} > > ? const char* name() const { > ??? return _name; > ? } > > ? virtual void do_thread(Thread* thread) = 0; > }; > > That way we expose a narrower API, and it also helps avoid the need for > HandshakeOperation -> ThreadClosure wrappers. The other stuff can stay internal > in a wrapping HandshakeThreadOperation like it did before. > > What do you think? > > /Per > > > On 11/26/19 2:06 PM, Robbin Ehn wrote: >> Hi, >> >> Here is the logging part separately: >> http://cr.openjdk.java.net/~rehn/8234742/v2/full/webrev/index.html >> >> It contains one additional change from the first version: >> ????? if (number_of_threads_issued < 1) { >> -????? log_debug(handshake)("No threads to handshake."); >> +????? log_handshake_info(start_time_ns, _op->name(), 0, 0, " (no threads)"); >> ??????? return; >> ????? } >> >> Passes t1-3. >> >> So this goes on top of 8234796: >> http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ >> >> Thanks, Robbin >> >> On 11/25/19 5:33 PM, Robbin Ehn wrote: >>> Hi all, please review. >>> >>> There is little useful information in the handshaking logs. >>> This changes the handshakes logs similar to safepoint logs, so the basic need of >>> what handshake operation and how long it took easily can be tracked. >>> Also the per thread log is a bit enhanced. >>> >>> The refactoring using HandshakeOperation instead of a ThreadClosure is not >>> merely for this change. Other changes in the pipeline also require a more >>> complex HandshakeOperation. >>> >>> Issue: >>> https://bugs.openjdk.java.net/browse/JDK-8234742 >>> Changeset: >>> http://cr.openjdk.java.net/~rehn/8234742/full/webrev/ >>> >>> Passes t1-3. 
>>> >>> Thanks, Robbin >>> >>> Examples: >>> -Xlog:handshake,safepoint >>> >>> [7.148s][info][safepoint] Safepoint "ZMarkStart", Time since last: 381873579 >>> ns, Reaching safepoint: 451132 ns, At safepoint: 491202 ns, Total: 942334 ns >>> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted >>> threads: 25, Executed by targeted threads: 8, Total completion time: 46884 ns >>> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted >>> threads: 25, Executed by targeted threads: 10, Total completion time: 94547 ns >>> [7.152s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted >>> threads: 25, Executed by targeted threads: 10, Total completion time: 33545 ns >>> [7.154s][info][safepoint] Safepoint "ZMarkEnd", Time since last: 4697901 ns, >>> Reaching safepoint: 218800 ns, At safepoint: 1462059 ns, Total: 1680859 ns >>> [7.156s][info][handshake] Handshake "ZRendezvous", Targeted threads: 25, >>> Executed by targeted threads: 10, Total completion time: 37291 ns >>> [7.157s][info][safepoint] Safepoint "ZVerify", Time since last: 2201206 ns, >>> Reaching safepoint: 295463 ns, At safepoint: 928077 ns, Total: 1223540 ns >>> [7.161s][info][safepoint] Safepoint "ZRelocateStart", Time since last: >>> 3161645 ns, Reaching safepoint: 206278 ns, At safepoint: 357284 ns, Total: >>> 563562 ns >>> [8.162s][info][safepoint] Safepoint "Cleanup", Time since last: 1000123769 >>> ns, Reaching safepoint: 526489 ns, At safepoint: 23345 ns, Total: 549834 ns >>> [8.182s][info][handshake] Handshake "RevokeOneBias", Targeted threads: 1, >>> Executed by targeted threads: 0, Total completion time: 41322 ns >>> >>> -Xlog:handshake*=trace >>> >>> [1.259s][trace][handshake ] Threads signaled, begin processing blocked >>> threads by VMThtread >>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f2594022800, is_vm_thread: true, completed in 487 ns >>> 
[1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f259459e000, is_vm_thread: false, completed in 1233 ns >>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f25945a0000, is_vm_thread: false, completed in 669 ns >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f259428a800, is_vm_thread: true, completed in 462 ns >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f25945b3800, is_vm_thread: false, completed in 574 ns >>> ... >>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f25945b6000, is_vm_thread: true, completed in 100 ns >>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f25945b7800, is_vm_thread: true, completed in 103 ns >>> [1.260s][info ][handshake ] Handshake "ZRendezvous", Targeted threads: 28, >>> Executed by targeted threads: 4, Total completion time: 629534 ns >>> [1.260s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f25945a3800, is_vm_thread: false, completed in 608 ns From igor.ignatyev at oracle.com Tue Nov 26 17:34:58 2019 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Tue, 26 Nov 2019 09:34:58 -0800 Subject: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 In-Reply-To: References: Message-ID: <5FEB52F4-2AA3-4ECD-A05E-5A2504F49984@oracle.com> for now, we have only test/jtreg-ext/requires/VMProps.java which sets @requires properties; although 'env.core.accessible' and similar properties aren't vm-related properties per-se so VMProps isn't the best place to set them. 
however VMProps already sets a few non-VM properties, and given that introducing another class to set these properties is a bit of hassle, I think it's fine to just add `env.core.accessible` to VMProps (and later rename VMProps to be more appropriate for all kinds of properties). there is one thing which you should be aware of when adding any new @requires properties: jtreg runs VMProps once for *every* execution even if none of the "target" tests use @requires, in other words, all test executions pay the price of setting all properties, therefore computing a new property shouldn't be costly. -- Igor > On Nov 26, 2019, at 2:07 AM, Baesken, Matthias wrote: > > > or to introduce new @requires property, say `env.core.accessible`, which is true iif core dumping is enabled and dumped cores can be accessed from the test code. > > Hello Igor, that sounds interesting. > > Who would set the say env.core.accessible property accordingly (so that in our environment on 10.15 it would be false) ? > > Best regards, Matthias > > > From: Igor Ignatyev > > Sent: Montag, 25. November 2019 21:10 > To: Baesken, Matthias > > Cc: hotspot-dev at openjdk.java.net > Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 > > Hi Matthias, > > your solution will hide the fact that the coverage from this test will be missed on macos 10.15+. I'd recommend you to use jtreg.SkippedException to signal that the test can't be run or to introduce new @requires property, say `env.core.accessible`, which is true iif core dumping is enabled and dumped cores can be accessed from the test code. > > -- Igor > > > On Nov 25, 2019, at 1:58 AM, Baesken, Matthias > wrote: > > Hello, the test > serviceability/sa/ClhsdbCDSCore.java > fails on macOS 10.15 . 
> exception : > java.lang.Error: cores is not a directory or does not have write permissions > at ClhsdbCDSCore.main(ClhsdbCDSCore.java:115) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:564) > at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127) > at java.base/java.lang.Thread.run(Thread.java:833) > > Looks like the test checks that directory /cores is writable : > File coresDir = new File("/cores"); > if (!coresDir.isDirectory() || !coresDir.canWrite()) { ... // fail > However on macOS 10.15 /cores is not writable any more (at least for most users, including our test user). > So the test fails. > > My change adjusts the test, so that it gives a clearer error message, and returns gracefully in case we are running on macOS 10.15 and notice a non-writeable /cores directory . > > > > Bug/webrev : > > https://bugs.openjdk.java.net/browse/JDK-8234625 > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.0/ > > > Best regards, Matthias From kim.barrett at oracle.com Wed Nov 27 00:18:07 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 26 Nov 2019 19:18:07 -0500 Subject: RFR: 8234779: Provide idiom for declaring classes noncopyable Message-ID: Please review this change that adds a new macro NONCOPYABLE, for declaring a class to be noncopyable. This change also modifies a bunch of classes to use the new macro. Most of those classes already included equivalent code, and we're just replacing that with uses of the macro. (A few classes in PtrQueue.hpp that weren't previously made noncopyable but should have been are now also using the NONCOPYABLE macro.) 
CR: https://bugs.openjdk.java.net/browse/JDK-8234779 Webrev: https://cr.openjdk.java.net/~kbarrett/8234779/open.00/ Testing: mach5 tier1 linux fastdebug builds for the following: 1. platforms: aarch64, arm32, ppc64le, s390x, zero. 2. x86_64 with shenandoah included. 3. x86_64 minimal configuration. From kim.barrett at oracle.com Wed Nov 27 00:39:49 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 26 Nov 2019 19:39:49 -0500 Subject: RFR: 8213415: BitMap::word_index_round_up overflow problems In-Reply-To: References: <24C2BDC9-AC35-4804-95C8-1B59747C1494@oracle.com> <5bd19480-d570-b73d-70fe-10e93ef2ecb8@oracle.com> <3F4F27AC-4812-4E55-99DF-25F0A7BD991D@oracle.com> Message-ID: <2FB27F12-B779-419E-A501-715248CF2309@oracle.com> > On Jun 11, 2019, at 12:42 PM, Kim Barrett wrote: > >> On Jun 10, 2019, at 9:14 PM, Kim Barrett wrote: >> new webrevs: >> full: http://cr.openjdk.java.net/~kbarrett/8213415/open.01/ >> incr: http://cr.openjdk.java.net/~kbarrett/8213415/open.01.inc/ > > Stefan and I have been talking about this offline. We have some ideas for further changes in > a slightly different direction, so no point in anyone else reviewing the open.01 changes right now > (or maybe ever). Finally returning to this. Stefan Karlsson and Thomas Shatzl had some offline feedback on earlier versions that led to some rethinking and rework. This included an attempt to be a little more consistent with nomenclature. There are still some lingering naming issues, which might be worth fixing some other time. The basic approach hasn't changed though. From the original RFR: Constructing a BitMap now ensures the size is such that rounding it up to a word boundary won't overflow. This is the new max_size_in_bits() value. This lets us add some asserts and otherwise tidy things up in some places by making use of that information. This engendered some changes to ParallelGC's ParMarkBitMap. 
It no longer uses the obsolete BitMap::word_align_up, instead having its own internal helper for aligning range ends that knows about invariants in ParMarkBitMap. CR: https://bugs.openjdk.java.net/browse/JDK-8213415 New webrev: http://cr.openjdk.java.net/~kbarrett/8213415/open.03/ (No incremental webrev; it wouldn't help that much for BitMap changes, and there have been several intervening months since the last one.) Testing: mach5 tier1-5 From david.holmes at oracle.com Wed Nov 27 01:18:37 2019 From: david.holmes at oracle.com (David Holmes) Date: Wed, 27 Nov 2019 11:18:37 +1000 Subject: RFR: 8234779: Provide idiom for declaring classes noncopyable In-Reply-To: References: Message-ID: <4a36701a-758d-e127-a1c4-32b2d31267ce@oracle.com> Hi Kim, That all seems fine to me. Now we just have to remember that this macro exists. :) Thanks, David On 27/11/2019 10:18 am, Kim Barrett wrote: > Please review this change that adds a new macro NONCOPYABLE, for > declaring a class to be noncopyable. This change also modifies a > bunch of classes to use the new macro. Most of those classes already > included equivalent code, and we're just replacing that with uses of > the macro. > > (A few classes in PtrQueue.hpp that weren't previously made > noncopyable but should have been are now also using the NONCOPYABLE > macro.) > > CR: > https://bugs.openjdk.java.net/browse/JDK-8234779 > > Webrev: > https://cr.openjdk.java.net/~kbarrett/8234779/open.00/ > > Testing: > mach5 tier1 > > linux fastdebug builds for the following: > 1. platforms: aarch64, arm32, ppc64le, s390x, zero. > 2. x86_64 with shenandoah included. > 3. x86_64 minimal configuration. 
> From david.holmes at oracle.com Wed Nov 27 03:25:41 2019 From: david.holmes at oracle.com (David Holmes) Date: Wed, 27 Nov 2019 13:25:41 +1000 Subject: RFR: 8234742: Improve handshake logging In-Reply-To: <31e700c7-d2fe-2977-e1e2-13e3cc10927d@oracle.com> References: <31e700c7-d2fe-2977-e1e2-13e3cc10927d@oracle.com> Message-ID: Hi Robbin, On 26/11/2019 11:06 pm, Robbin Ehn wrote: > Hi, > > Here is the logging part separately: > http://cr.openjdk.java.net/~rehn/8234742/v2/full/webrev/index.html > > It contains one additional change from the first version: > ???? if (number_of_threads_issued < 1) { > -????? log_debug(handshake)("No threads to handshake."); > +????? log_handshake_info(start_time_ns, _op->name(), 0, 0, " (no > threads)"); > ?????? return; > ???? } > > Passes t1-3. > > So this goes on top of 8234796: > http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ src/hotspot/share/runtime/handshake.hpp I was surprised that "process_by_vmthread" doesn't really mean that. Perhaps rename to try_process_by_vmThread? --- src/hotspot/share/runtime/handshake.cpp 91 log_info(handshake)("Handshake \"%s\", Targeted threads: %d, Executed by targeted threads: %d, Total completion time: " JLONG_FORMAT " ns%s", Probably better to end with " ns, %s" so you don't have to remember to start the 'extra' string with a space each time. 168 log_trace(handshake)("Threads signaled, begin processing blocked threads by VMThtread") Existing typo: VMThtread Otherwise seems okay. Thanks, David ----- > Thanks, Robbin > > On 11/25/19 5:33 PM, Robbin Ehn wrote: >> Hi all, please review. >> >> There is little useful information in the handshaking logs. >> This changes the handshakes logs similar to safepoint logs, so the >> basic need of >> what handshake operation and how long it took easily can be tracked. >> Also the per thread log is a bit enhanced. >> >> The refactoring using HandshakeOperation instead of a ThreadClosure is >> not >> merely for this change. 
Other changes in the pipeline also require a more >> complex HandshakeOperation. >> >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8234742 >> Changeset: >> http://cr.openjdk.java.net/~rehn/8234742/full/webrev/ >> >> Passes t1-3. >> >> Thanks, Robbin >> >> Examples: >> -Xlog:handshake,safepoint >> >> [7.148s][info][safepoint] Safepoint "ZMarkStart", Time since last: >> 381873579 ns, Reaching safepoint: 451132 ns, At safepoint: 491202 ns, >> Total: 942334 ns >> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", >> Targeted threads: 25, Executed by targeted threads: 8, Total >> completion time: 46884 ns >> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", >> Targeted threads: 25, Executed by targeted threads: 10, Total >> completion time: 94547 ns >> [7.152s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", >> Targeted threads: 25, Executed by targeted threads: 10, Total >> completion time: 33545 ns >> [7.154s][info][safepoint] Safepoint "ZMarkEnd", Time since last: >> 4697901 ns, Reaching safepoint: 218800 ns, At safepoint: 1462059 ns, >> Total: 1680859 ns >> [7.156s][info][handshake] Handshake "ZRendezvous", Targeted threads: >> 25, Executed by targeted threads: 10, Total completion time: 37291 ns >> [7.157s][info][safepoint] Safepoint "ZVerify", Time since last: >> 2201206 ns, Reaching safepoint: 295463 ns, At safepoint: 928077 ns, >> Total: 1223540 ns >> [7.161s][info][safepoint] Safepoint "ZRelocateStart", Time since last: >> 3161645 ns, Reaching safepoint: 206278 ns, At safepoint: 357284 ns, >> Total: 563562 ns >> [8.162s][info][safepoint] Safepoint "Cleanup", Time since last: >> 1000123769 ns, Reaching safepoint: 526489 ns, At safepoint: 23345 ns, >> Total: 549834 ns >> [8.182s][info][handshake] Handshake "RevokeOneBias", Targeted threads: >> 1, Executed by targeted threads: 0, Total completion time: 41322 ns >> >> -Xlog:handshake*=trace >> >> [1.259s][trace][handshake ] Threads signaled, begin processing blocked >> threads 
by VMThtread >> [1.259s][trace][handshake ] Processing handshake by VMThtread >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f2594022800, is_vm_thread: true, completed in 487 ns >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f259459e000, is_vm_thread: false, completed in 1233 ns >> [1.259s][trace][handshake ] Processing handshake by VMThtread >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945a0000, is_vm_thread: false, completed in 669 ns >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f259428a800, is_vm_thread: true, completed in 462 ns >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945b3800, is_vm_thread: false, completed in 574 ns >> ... >> [1.259s][trace][handshake ] Processing handshake by VMThtread >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945b6000, is_vm_thread: true, completed in 100 ns >> [1.259s][trace][handshake ] Processing handshake by VMThtread >> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945b7800, is_vm_thread: true, completed in 103 ns >> [1.260s][info ][handshake ] Handshake "ZRendezvous", Targeted threads: >> 28, Executed by targeted threads: 4, Total completion time: 629534 ns >> [1.260s][debug][handshake,task ] Operation: ZRendezvous for thread >> 0x00007f25945a3800, is_vm_thread: false, completed in 608 ns From david.holmes at oracle.com Wed Nov 27 03:52:41 2019 From: david.holmes at oracle.com (David Holmes) Date: Wed, 27 Nov 2019 13:52:41 +1000 Subject: RFR: 8234748: Clean up atomic and orderAccess includes In-Reply-To: References: Message-ID: <63ce7ffa-05f8-7698-6bd7-50e934b19d93@oracle.com> Hi Stefan, On 26/11/2019 9:11 pm, Stefan Karlsson wrote: > Hi all, > > Please review this trivial, but large, patch to cleanup the includes of > atomic.hpp and orderAccess.hpp. 
> > https://cr.openjdk.java.net/~stefank/8234748/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8234748 That all seems fine. Thanks for doing that very tedious cleanup! David ----- > Thanks, > StefanK From stefan.karlsson at oracle.com Wed Nov 27 08:34:41 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 27 Nov 2019 09:34:41 +0100 Subject: RFR: 8234779: Provide idiom for declaring classes noncopyable In-Reply-To: References: Message-ID: <1c34ddb3-02a9-4e5e-406e-24ff39d52353@oracle.com> Hi Kim, On 2019-11-27 01:18, Kim Barrett wrote: > Please review this change that adds a new macro NONCOPYABLE, for > declaring a class to be noncopyable. This change also modifies a > bunch of classes to use the new macro. Most of those classes already > included equivalent code, and we're just replacing that with uses of > the macro. Did you consider doing something like this instead of using this macro? diff --git a/src/hotspot/share/runtime/semaphore.hpp b/src/hotspot/share/runtime/semaphore.hpp --- a/src/hotspot/share/runtime/semaphore.hpp +++ b/src/hotspot/share/runtime/semaphore.hpp @@ -44,8 +44,7 @@ SemaphoreImpl _impl; // Prevent copying and assignment of Semaphore instances. 
- Semaphore(const Semaphore&); - Semaphore& operator=(const Semaphore&); + NonCopyable _copy_poison; public: Semaphore(uint value = 0) : _impl(value) {} @@ -60,4 +59,10 @@ void wait_with_safepoint_check(JavaThread* thread); }; +void test_copy_semaphore() { + Semaphore s; + Semaphore sc1(s); + Semaphore sc2; sc2 = s; +} + #endif // SHARE_RUNTIME_SEMAPHORE_HPP diff --git a/src/hotspot/share/utilities/globalDefinitions.hpp b/src/hotspot/share/utilities/globalDefinitions.hpp --- a/src/hotspot/share/utilities/globalDefinitions.hpp +++ b/src/hotspot/share/utilities/globalDefinitions.hpp @@ -1193,5 +1193,13 @@ return k0 == k1; } +class NonCopyable { +private: + NonCopyable(NonCopyable const&); + NonCopyable& operator=(NonCopyable const&); + +public: + NonCopyable() {} +}; #endif // SHARE_UTILITIES_GLOBALDEFINITIONS_HPP Thanks, StefanK > > (A few classes in PtrQueue.hpp that weren't previously made > noncopyable but should have been are now also using the NONCOPYABLE > macro.) > > CR: > https://bugs.openjdk.java.net/browse/JDK-8234779 > > Webrev: > https://cr.openjdk.java.net/~kbarrett/8234779/open.00/ > > Testing: > mach5 tier1 > > linux fastdebug builds for the following: > 1. platforms: aarch64, arm32, ppc64le, s390x, zero. > 2. x86_64 with shenandoah included. > 3. x86_64 minimal configuration. > From stefan.karlsson at oracle.com Wed Nov 27 08:35:14 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 27 Nov 2019 09:35:14 +0100 Subject: RFR: 8234748: Clean up atomic and orderAccess includes In-Reply-To: <63ce7ffa-05f8-7698-6bd7-50e934b19d93@oracle.com> References: <63ce7ffa-05f8-7698-6bd7-50e934b19d93@oracle.com> Message-ID: <285d8e6a-e78d-f85f-196e-8e566fa7b530@oracle.com> Thanks for reviewing! StefanK On 2019-11-27 04:52, David Holmes wrote: > Hi Stefan, > > On 26/11/2019 9:11 pm, Stefan Karlsson wrote: >> Hi all, >> >> Please review this trivial, but large, patch to cleanup the includes >> of atomic.hpp and orderAccess.hpp. 
>> >> https://cr.openjdk.java.net/~stefank/8234748/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8234748 > > That all seems fine. > > Thanks for doing that very tedious cleanup! > > David > ----- > >> Thanks, >> StefanK From per.liden at oracle.com Wed Nov 27 08:43:10 2019 From: per.liden at oracle.com (Per Liden) Date: Wed, 27 Nov 2019 09:43:10 +0100 Subject: RFR: 8234779: Provide idiom for declaring classes noncopyable In-Reply-To: References: Message-ID: <233a779d-ae3b-4856-9c44-dd81bfceab6e@oracle.com> Please don't add this :( I don't think this adds any value; it adds another ugly macro that I know I will never want to use. I'd much prefer to read real C++ instead of some macro that hides what's going on. Sorry, /Per On 11/27/19 1:18 AM, Kim Barrett wrote: > Please review this change that adds a new macro NONCOPYABLE, for > declaring a class to be noncopyable. This change also modifies a > bunch of classes to use the new macro. Most of those classes already > included equivalent code, and we're just replacing that with uses of > the macro. > > (A few classes in PtrQueue.hpp that weren't previously made > noncopyable but should have been are now also using the NONCOPYABLE > macro.) > > CR: > https://bugs.openjdk.java.net/browse/JDK-8234779 > > Webrev: > https://cr.openjdk.java.net/~kbarrett/8234779/open.00/ > > Testing: > mach5 tier1 > > linux fastdebug builds for the following: > 1. platforms: aarch64, arm32, ppc64le, s390x, zero. > 2. x86_64 with shenandoah included. > 3. x86_64 minimal configuration. > From matthias.baesken at sap.com Wed Nov 27 10:08:52 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Wed, 27 Nov 2019 10:08:52 +0000 Subject: RFR: 8234741: enhance os::get_core_path on macOS In-Reply-To: <495770f4-122c-436c-6606-9f9dd79d1e90@oracle.com> References: <495770f4-122c-436c-6606-9f9dd79d1e90@oracle.com> Message-ID: Hi Gerard, thanks for your input . 
New webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8234741.1/ Best regards, Matthias > > hi Matthias, > > I had to look up "kern.corefile" option, of which I was not previously > aware - it looks like a nice enhancement. > > I'd like to suggest a few small cleanups though: > > #1 add "os::" prefix to "current_process_id()" call > #2 restrict the scope of "os::current_process_id()" to only the branch > of "if" that needs it > #3 expand on the "tail" comment a bit to explain why it might be needed > > Perhaps something like this: > > char coreinfo[MAX_PATH]; > size_t sz = sizeof(coreinfo); > int ret = sysctlbyname("kern.corefile", coreinfo, &sz, NULL, 0); > if (ret == 0) { > char *pid_pos = strstr(coreinfo, "%P"); > const char* tail = (pid_pos != NULL) ? (pid_pos + 2) : ""; // skip > over the "%P" to preserve any optional custom user pattern (i.e. %N, %U) > if (pid_pos != NULL) { > *pid_pos = '\0'; > n = jio_snprintf(buffer, bufferSize, "%s%d%s", coreinfo, > os::current_process_id(), tail); > } else { > n = jio_snprintf(buffer, bufferSize, "%s", coreinfo); > } > } > > BTW. I'm glad you agree to remove the unrelated AWT change from this fix > and let the client team handle it. > > From thomas.schatzl at oracle.com Wed Nov 27 10:20:18 2019 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 27 Nov 2019 11:20:18 +0100 Subject: RFR: 8234748: Clean up atomic and orderAccess includes In-Reply-To: References: Message-ID: <9cf98bb9-3862-cfb0-2a4b-3dd35cc250b9@oracle.com> Hi Stefan, thanks for tackling this. On 26.11.19 12:11, Stefan Karlsson wrote: > Hi all, > > Please review this trivial, but large, patch to cleanup the includes of > atomic.hpp and orderAccess.hpp. > > https://cr.openjdk.java.net/~stefank/8234748/webrev.01/ > https://bugs.openjdk.java.net/browse/JDK-8234748 > > Thanks, > StefanK I did the following bash-fu to find missing/superfluous includes: grep "Atomic::" `find . 
-name '*.?pp'` | sed 's/\(.*pp\):.*/\1/' | uniq | sort > users.txt $ grep "atomic.hpp" `find . -name '*.?pp'` | sed 's/\(.*\):.*/\1/' | uniq | sort > includers.txt $ diff users.txt includers.txt > diff.txt diff.txt then contained the differences, with some false positives, all about containing the keywords in comments. Otherwise I think this should be complete. The same has been done with "OrderAccess::" and "orderAccess.hpp". Here are the results: Improvements to orderAccess.hpp includes: - gc/shenandoah/shenandoahVerifier.cpp misses it - the #include from gc/z/zLiveMap.inline.hpp should maybe be moved to zLiveMap.cpp Improvements to atomic.hpp includes: In the following files the include of atomic.hpp should be removed as it seems unnecessary: - share/utilities/bitMap.hpp - share/oops/oop.hpp - share/gc/z/zNMethodTable.cpp - share/gc/shenandoah/shenandoahForwarding.inline.hpp - share/gc/g1/g1CardTable.cpp - os_cpu/solaris_x86/os_solaris_x86.cpp The following need a #include atomic.hpp: - share/oops/methodData.cpp - share/oops/constantPool.cpp - share/memory/metaspace/virtualSpaceNdoe.cpp - os/posix/os_posix.cpp Looks good otherwise. Thanks, Thomas From stefan.karlsson at oracle.com Wed Nov 27 10:34:23 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 27 Nov 2019 11:34:23 +0100 Subject: RFR: 8234748: Clean up atomic and orderAccess includes In-Reply-To: <9cf98bb9-3862-cfb0-2a4b-3dd35cc250b9@oracle.com> References: <9cf98bb9-3862-cfb0-2a4b-3dd35cc250b9@oracle.com> Message-ID: Hi Thomas, On 2019-11-27 11:20, Thomas Schatzl wrote: > Hi Stefan, > > ? thanks for tackling this. > > On 26.11.19 12:11, Stefan Karlsson wrote: >> Hi all, >> >> Please review this trivial, but large, patch to cleanup the includes >> of atomic.hpp and orderAccess.hpp. 
>> >> https://cr.openjdk.java.net/~stefank/8234748/webrev.01/ >> https://bugs.openjdk.java.net/browse/JDK-8234748 >> >> Thanks, >> StefanK > > I did the following bash-fu to find missing/superfluous includes: > > ?grep "Atomic::" `find . -name '*.?pp'` | sed 's/\(.*pp\):.*/\1/' | > uniq | sort > users.txt > $ grep "atomic.hpp" `find . -name '*.?pp'` | sed 's/\(.*\):.*/\1/' | > uniq | sort > includers.txt > $ diff users.txt includers.txt > diff.txt > > diff.txt then contained the differences, with some false positives, all > about containing the keywords in comments. Otherwise I think this should > be complete. > > The same has been done with "OrderAccess::" and "orderAccess.hpp". > > Here are the results: > > Improvements to orderAccess.hpp includes: > > - gc/shenandoah/shenandoahVerifier.cpp misses it > > - the #include from gc/z/zLiveMap.inline.hpp should maybe be moved to > zLiveMap.cpp > > Improvements to atomic.hpp includes: > > In the following files the include of atomic.hpp should be removed as it > seems unnecessary: > - share/utilities/bitMap.hpp > - share/oops/oop.hpp These needs atomic.hpp since they use atomic_memory_order. > - share/gc/z/zNMethodTable.cpp > - share/gc/shenandoah/shenandoahForwarding.inline.hpp > - share/gc/g1/g1CardTable.cpp > - os_cpu/solaris_x86/os_solaris_x86.cpp > > The following need a #include atomic.hpp: > - share/oops/methodData.cpp > - share/oops/constantPool.cpp > - share/memory/metaspace/virtualSpaceNdoe.cpp > - os/posix/os_posix.cpp > > Looks good otherwise. Since, I already pushed the original bug, I created a new with your changes: https://cr.openjdk.java.net/~stefank/8234897/webrev.01/ I intend to take this through our build steps, and push it if it's succeeds. Thanks, StefanK > > Thanks, > ? 
Thomas From thomas.schatzl at oracle.com Wed Nov 27 10:39:58 2019 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 27 Nov 2019 11:39:58 +0100 Subject: RFR: 8234748: Clean up atomic and orderAccess includes In-Reply-To: References: <9cf98bb9-3862-cfb0-2a4b-3dd35cc250b9@oracle.com> Message-ID: <9b162680-5adb-2096-eef7-e16604c882db@oracle.com> Hi, On 27.11.19 11:34, Stefan Karlsson wrote: > Hi Thomas, > > On 2019-11-27 11:20, Thomas Schatzl wrote: >> Hi Stefan, >> >> ?? thanks for tackling this. >> >> On 26.11.19 12:11, Stefan Karlsson wrote: >>> Hi all, >>>[...] >> Here are the results: >> >> Improvements to orderAccess.hpp includes: >> >> - gc/shenandoah/shenandoahVerifier.cpp misses it >> >> - the #include from gc/z/zLiveMap.inline.hpp should maybe be moved to >> zLiveMap.cpp >> >> Improvements to atomic.hpp includes: >> >> In the following files the include of atomic.hpp should be removed as >> it seems unnecessary: >> - share/utilities/bitMap.hpp >> - share/oops/oop.hpp > > These needs atomic.hpp since they use atomic_memory_order. Okay, did not consider these. Good catch. > >> - share/gc/z/zNMethodTable.cpp >> - share/gc/shenandoah/shenandoahForwarding.inline.hpp >> - share/gc/g1/g1CardTable.cpp >> - os_cpu/solaris_x86/os_solaris_x86.cpp >> >> The following need a #include atomic.hpp: >> - share/oops/methodData.cpp >> - share/oops/constantPool.cpp >> - share/memory/metaspace/virtualSpaceNdoe.cpp >> - os/posix/os_posix.cpp >> >> Looks good otherwise. > > Since, I already pushed the original bug, I created a new with your > changes: > https://cr.openjdk.java.net/~stefank/8234897/webrev.01/ > > I intend to take this through our build steps, and push it if it's > succeeds. > This new change looks good. 
Thanks, Thomas From david.holmes at oracle.com Wed Nov 27 10:47:26 2019 From: david.holmes at oracle.com (David Holmes) Date: Wed, 27 Nov 2019 20:47:26 +1000 Subject: RFR: 8234748: Clean up atomic and orderAccess includes In-Reply-To: References: <9cf98bb9-3862-cfb0-2a4b-3dd35cc250b9@oracle.com> Message-ID: <544d095b-7a7c-0ec4-8815-4280e2b9a2e7@oracle.com> On 27/11/2019 8:34 pm, Stefan Karlsson wrote: > Hi Thomas, > > On 2019-11-27 11:20, Thomas Schatzl wrote: >> Hi Stefan, >> >> ?? thanks for tackling this. >> >> On 26.11.19 12:11, Stefan Karlsson wrote: >>> Hi all, >>> >>> Please review this trivial, but large, patch to cleanup the includes >>> of atomic.hpp and orderAccess.hpp. >>> >>> https://cr.openjdk.java.net/~stefank/8234748/webrev.01/ >>> https://bugs.openjdk.java.net/browse/JDK-8234748 >>> >>> Thanks, >>> StefanK >> >> I did the following bash-fu to find missing/superfluous includes: >> >> ??grep "Atomic::" `find . -name '*.?pp'` | sed 's/\(.*pp\):.*/\1/' | >> uniq | sort > users.txt >> $ grep "atomic.hpp" `find . -name '*.?pp'` | sed 's/\(.*\):.*/\1/' | >> uniq | sort > includers.txt >> $ diff users.txt includers.txt > diff.txt >> >> diff.txt then contained the differences, with some false positives, >> all about containing the keywords in comments. Otherwise I think this >> should be complete. >> >> The same has been done with "OrderAccess::" and "orderAccess.hpp". >> >> Here are the results: >> >> Improvements to orderAccess.hpp includes: >> >> - gc/shenandoah/shenandoahVerifier.cpp misses it >> >> - the #include from gc/z/zLiveMap.inline.hpp should maybe be moved to >> zLiveMap.cpp >> >> Improvements to atomic.hpp includes: >> >> In the following files the include of atomic.hpp should be removed as >> it seems unnecessary: >> - share/utilities/bitMap.hpp >> - share/oops/oop.hpp > > These needs atomic.hpp since they use atomic_memory_order. 
> >> - share/gc/z/zNMethodTable.cpp >> - share/gc/shenandoah/shenandoahForwarding.inline.hpp >> - share/gc/g1/g1CardTable.cpp >> - os_cpu/solaris_x86/os_solaris_x86.cpp >> >> The following need a #include atomic.hpp: >> - share/oops/methodData.cpp >> - share/oops/constantPool.cpp >> - share/memory/metaspace/virtualSpaceNdoe.cpp >> - os/posix/os_posix.cpp >> >> Looks good otherwise. > > Since, I already pushed the original bug, I created a new with your > changes: > https://cr.openjdk.java.net/~stefank/8234897/webrev.01/ I've verified each of those as well. Thanks, David > > I intend to take this through our build steps, and push it if it's > succeeds. > > Thanks, > StefanK > >> >> Thanks, >> ?? Thomas From thomas.schatzl at oracle.com Wed Nov 27 11:09:25 2019 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 27 Nov 2019 12:09:25 +0100 Subject: RFR: 8213415: BitMap::word_index_round_up overflow problems In-Reply-To: <2FB27F12-B779-419E-A501-715248CF2309@oracle.com> References: <24C2BDC9-AC35-4804-95C8-1B59747C1494@oracle.com> <5bd19480-d570-b73d-70fe-10e93ef2ecb8@oracle.com> <3F4F27AC-4812-4E55-99DF-25F0A7BD991D@oracle.com> <2FB27F12-B779-419E-A501-715248CF2309@oracle.com> Message-ID: <1283c20e-9cd8-8031-554b-e23b9f5ae7e0@oracle.com> Hi Kim, On 27.11.19 01:39, Kim Barrett wrote: >> On Jun 11, 2019, at 12:42 PM, Kim Barrett wrote: >> >>> On Jun 10, 2019, at 9:14 PM, Kim Barrett wrote: >>> new webrevs: >>> full: http://cr.openjdk.java.net/~kbarrett/8213415/open.01/ >>> incr: http://cr.openjdk.java.net/~kbarrett/8213415/open.01.inc/ >> >> Stefan and I have been talking about this offline. We have some ideas for further changes in >> a slightly different direction, so no point in anyone else reviewing the open.01 changes right now >> (or maybe ever). > > Finally returning to this. Stefan Karlsson and Thomas Shatzl had some > offline feedback on earlier versions that led to some rethinking and > rework. 
This included an attempt to be a little more consistent with > nomenclature. There are still some lingering naming issues, which > might be worth fixing some other time. > > The basic approach hasn't changed though. From the original RFR: > > Constructing a BitMap now ensures the size is such that rounding it up > to a word boundary won't overflow. This is the new max_size_in_bits() > value. This lets us add some asserts and otherwise tidy things up in > some places by making use of that information. > > This engendered some changes to ParallelGC's ParMarkBitMap. It no > longer uses the obsolete BitMap::word_align_up, instead having its own > internal helper for aligning range ends that knows about invariants in > ParMarkBitMap. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8213415 > > New webrev: > http://cr.openjdk.java.net/~kbarrett/8213415/open.03/ > lgtm. Thanks, Thomas From thomas.schatzl at oracle.com Wed Nov 27 11:11:13 2019 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 27 Nov 2019 12:11:13 +0100 Subject: RFR: 8213415: BitMap::word_index_round_up overflow problems In-Reply-To: <1283c20e-9cd8-8031-554b-e23b9f5ae7e0@oracle.com> References: <24C2BDC9-AC35-4804-95C8-1B59747C1494@oracle.com> <5bd19480-d570-b73d-70fe-10e93ef2ecb8@oracle.com> <3F4F27AC-4812-4E55-99DF-25F0A7BD991D@oracle.com> <2FB27F12-B779-419E-A501-715248CF2309@oracle.com> <1283c20e-9cd8-8031-554b-e23b9f5ae7e0@oracle.com> Message-ID: Hi again, one thing I forgot: there is a merge error with StefanK's latest changes about includes for atomic/orderAccess.hpp in bitMap.inline?.hpp. I do not need a re-review for that. 
Thanks, Thomas On 27.11.19 12:09, Thomas Schatzl wrote: > Hi Kim, > > On 27.11.19 01:39, Kim Barrett wrote: >>> On Jun 11, 2019, at 12:42 PM, Kim Barrett >>> wrote: >>> >>>> On Jun 10, 2019, at 9:14 PM, Kim Barrett >>>> wrote: >>>> new webrevs: >>>> full: http://cr.openjdk.java.net/~kbarrett/8213415/open.01/ >>>> incr: http://cr.openjdk.java.net/~kbarrett/8213415/open.01.inc/ >>> >>> Stefan and I have been talking about this offline.? We have some >>> ideas for further changes in >>> a slightly different direction, so no point in anyone else reviewing >>> the open.01 changes right now >>> (or maybe ever). >> >> Finally returning to this.? Stefan Karlsson and Thomas Shatzl had some >> offline feedback on earlier versions that led to some rethinking and >> rework.? This included an attempt to be a little more consistent with >> nomenclature.? There are still some lingering naming issues, which >> might be worth fixing some other time. >> >> The basic approach hasn't changed though.? From the original RFR: >> >> Constructing a BitMap now ensures the size is such that rounding it up >> to a word boundary won't overflow.? This is the new max_size_in_bits() >> value. This lets us add some asserts and otherwise tidy things up in >> some places by making use of that information. >> >> This engendered some changes to ParallelGC's ParMarkBitMap.? It no >> longer uses the obsolete BitMap::word_align_up, instead having its own >> internal helper for aligning range ends that knows about invariants in >> ParMarkBitMap. >> >> CR: >> https://bugs.openjdk.java.net/browse/JDK-8213415 >> >> New webrev: >> http://cr.openjdk.java.net/~kbarrett/8213415/open.03/ >> > > ? lgtm. > > Thanks, > ? 
Thomas From matthias.baesken at sap.com Wed Nov 27 14:30:06 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Wed, 27 Nov 2019 14:30:06 +0000 Subject: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 In-Reply-To: <5FEB52F4-2AA3-4ECD-A05E-5A2504F49984@oracle.com> References: <5FEB52F4-2AA3-4ECD-A05E-5A2504F49984@oracle.com> Message-ID: I'd recommend you to use jtreg.SkippedException to signal that the test can't be run or to introduce new @requires property Hello, I now changed the test to throw a SkippedException : http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.1/ Best regards, Matthias From: Igor Ignatyev Sent: Dienstag, 26. November 2019 18:35 To: Baesken, Matthias Cc: hotspot-dev at openjdk.java.net Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 for now, we have only test/jtreg-ext/requires/VMProps.java which sets @requires properties; although 'env.core.accessible' and similar properties aren't vm-related properties per se, so VMProps isn't the best place to set them. however VMProps already sets a few non vm properties, and given that introducing another class to set these properties is a bit of hassle, I think it's fine to just add `env.core.accessible` to VMProps (and later rename VMProps to be more appropriate for all kinds of properties). there is one thing which you should be aware of when adding any new requires properties: jtreg runs VMProps once for *every* execution even if none of the "target" tests use @requires, in other words, all test executions pay the price of setting all properties, therefore it shouldn't be costly. -- Igor On Nov 26, 2019, at 2:07 AM, Baesken, Matthias > wrote: * or to introduce new @requires property, say `env.core.accessible`, which is true iff core dumping is enabled and dumped cores can be accessed from the test code. Hello Igor, that sounds interesting. 
Who would set the, say, env.core.accessible property accordingly (so that in our environment on 10.15 it would be false) ? Best regards, Matthias From: Igor Ignatyev > Sent: Montag, 25. November 2019 21:10 To: Baesken, Matthias > Cc: hotspot-dev at openjdk.java.net Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 Hi Matthias, your solution will hide the fact that the coverage from this test will be missed on macos 10.15+. I'd recommend you to use jtreg.SkippedException to signal that the test can't be run or to introduce new @requires property, say `env.core.accessible`, which is true iff core dumping is enabled and dumped cores can be accessed from the test code. -- Igor On Nov 25, 2019, at 1:58 AM, Baesken, Matthias > wrote: Hello, the test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 . exception : java.lang.Error: cores is not a directory or does not have write permissions at ClhsdbCDSCore.main(ClhsdbCDSCore.java:115) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127) at java.base/java.lang.Thread.run(Thread.java:833) Looks like the test checks that directory /cores is writable : File coresDir = new File("/cores"); if (!coresDir.isDirectory() || !coresDir.canWrite()) { ... // fail However on macOS 10.15 /cores is not writable any more (at least for most users, including our test user). So the test fails. My change adjusts the test, so that it gives a clearer error message, and returns gracefully in case we are running on macOS 10.15 and notice a non-writeable /cores directory . 
Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8234625 http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.0/ Best regards, Matthias From kim.barrett at oracle.com Wed Nov 27 15:08:56 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 27 Nov 2019 10:08:56 -0500 Subject: RFR: 8234779: Provide idiom for declaring classes noncopyable In-Reply-To: <1c34ddb3-02a9-4e5e-406e-24ff39d52353@oracle.com> References: <1c34ddb3-02a9-4e5e-406e-24ff39d52353@oracle.com> Message-ID: > On Nov 27, 2019, at 3:34 AM, Stefan Karlsson wrote: > > Hi Kim, > > On 2019-11-27 01:18, Kim Barrett wrote: >> Please review this change that adds a new macro NONCOPYABLE, for >> declaring a class to be noncopyable. This change also modifies a >> bunch of classes to use the new macro. Most of those classes already >> included equivalent code, and we're just replacing that with uses of >> the macro. > Did you consider doing something like this instead of using this macro? > [...] snip of NonCopyable class for uses as a private member. That's a "well known to be problematic" solution. The NonCopyable member will use additional space in all objects of the containing class, unless there happens to be an alignment gap it can be slipped into. And those are additive if a class contains multiple noncopyable members. A NonCopyable base class will sometimes avoid adding space on platforms that do EBO (Empty Base Optimization). EBO in such a case will still be prevented if the first member of the derived class is similarly derived from NonCopyable. Requiring a base class for this feature is also problematic in the face of HotSpot's allocation base classes and eschewing multiple inheritance. Also, my recollection is that multiple inheritance is a case where some platforms are more likely to fail to do EBO. 
From robbin.ehn at oracle.com Wed Nov 27 15:25:21 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Wed, 27 Nov 2019 16:25:21 +0100 Subject: RFR: 8234796: Refactor Handshake::execute to take a HandshakeOperation In-Reply-To: References: Message-ID: <1ff4ff37-c65e-43e7-a845-bc6ce06750f0@oracle.com> Hi all, please review. Here is the result after Per's suggestion: http://cr.openjdk.java.net/~rehn/8234796/v2/full/webrev/index.html (incremental made no sense) Due to circular dependency between thread.hpp and handshake.hpp, I moved the ThreadClosure to iterator.hpp, as was suggested offline. Passes t1-3 Thanks, Robbin On 11/26/19 2:07 PM, Robbin Ehn wrote: > Hi all, please review. > > Issue: > https://bugs.openjdk.java.net/browse/JDK-8234796 > Code: > http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ > > The handshake code needs more information about the handshake operation. > We change type from ThreadClosure to HandshakeOperation in Handshake::execute. > This enables us to add more details to the HandshakeOperation as needed going > forward. > > Tested t1 and t1-3 together with the logging improvements in 8234742. > > It was requested that "HandshakeOperation()" would take the name instead having > "virtual const char* name();". Which is in this patch. > > Thanks, Robbin From lutz.schmidt at sap.com Wed Nov 27 15:35:50 2019 From: lutz.schmidt at sap.com (Schmidt, Lutz) Date: Wed, 27 Nov 2019 15:35:50 +0000 Subject: 8234397: add OS uptime information to os::print_os_info output Message-ID: <30E06543-E081-429B-8293-8CA81D1F6870@sap.com> Matthias, your change looks good to me overall. Please note: I'm not a Reviewer! I feel the urge to complain about one thing, though: When calculating the uptime in days, you divide the time retrieved from the system (usually seconds or milliseconds) by a large number. Why do you force that number to be a float? I would prefer the denominator to be an "int" value. 
Rationale: floats (32bits) are very limited in precision, only 6 to 7 decimal digits. At least in the Windows case, where you obtain milliseconds from the system, your denominator is 86,400,000. At first glance, that does not fit into a float mantissa. What saves you here are the prime factors "2" (10 in total). As a result, you only need 17 mantissa bits to represent the denominator. Thanks, Lutz On 25.11.19, 09:06, "hotspot-dev on behalf of Baesken, Matthias" wrote: > > The comment in the posix code mentions that it doesn't work on macOS but > doesn't say anything about Linux. Has it been tested on Solaris? > Hi David, it works on Solaris . I think I should adjust the comment (saying macOS AND Linux) . Best regards, Matthias > > > > One example that occurred last week - my colleague Christoph and me > were browsing through an hs_err file of a crash on AIX . > > When looking into the hs_err we wanted to know the uptime because > our latest fontconfig - patches (for getting rid of the crash) needed a > reboot too to really work . > > Unfortunately we could not find the info , and we were disappointed ( > then we noticed the crash is from OpenJDK and not our internal JVM ). > > > > > >>> Bug/webrev : > >>> https://bugs.openjdk.java.net/browse/JDK-8234397 > >>> http://cr.openjdk.java.net/~mbaesken/webrevs/8234397.1/ > >> > >> Can Linux not use the POSIX version? > >> > > > > Unfortunately the posix code does not give the desired result on Linux (at > least on my test machines). > > The comment in the posix code mentions that it doesn't work on macOS but > doesn't say anything about Linux. Has it been tested on Solaris? > > I'm really unsure about this code and am hoping someone more > knowledgeable in this area can chime in. I'd be less concerned if there > was a single POSIX implementation that worked everywhere. :( Though I > have my general concern about adding yet another potential point of > failure in the error reporting logic. 
> > Thanks, > David > > > Best regards, Matthias > > From robbin.ehn at oracle.com Wed Nov 27 15:51:48 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Wed, 27 Nov 2019 16:51:48 +0100 Subject: RFR: 8234742: Improve handshake logging In-Reply-To: References: <31e700c7-d2fe-2977-e1e2-13e3cc10927d@oracle.com> Message-ID: Hi David, thanks for having a look! On 11/27/19 4:25 AM, David Holmes wrote: > Hi Robbin, > > On 26/11/2019 11:06 pm, Robbin Ehn wrote: >> Hi, >> >> Here is the logging part separately: >> http://cr.openjdk.java.net/~rehn/8234742/v2/full/webrev/index.html >> >> It contains one additional change from the first version: >> ????? if (number_of_threads_issued < 1) { >> -????? log_debug(handshake)("No threads to handshake."); >> +????? log_handshake_info(start_time_ns, _op->name(), 0, 0, " (no threads)"); >> ??????? return; >> ????? } >> >> Passes t1-3. >> >> So this goes on top of 8234796: >> http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ > > src/hotspot/share/runtime/handshake.hpp > > I was surprised that "process_by_vmthread" doesn't really mean that. Perhaps > rename to try_process_by_vmThread? I did, but also renamed: bool handshake_process_by_vmthread() in thread.hpp > > --- > > src/hotspot/share/runtime/handshake.cpp > > 91???? log_info(handshake)("Handshake \"%s\", Targeted threads: %d, Executed by > targeted threads: %d, Total completion time: " JLONG_FORMAT " ns%s", > > Probably better to end with " ns, %s" so you don't have to remember to start the > 'extra' string with a space each time. Fixed, but I didn't like the ending "," when the string was empty, so I did a different fix! > > 168???? log_trace(handshake)("Threads signaled, begin processing blocked threads > by VMThtread") > > Existing typo: VMThtread Fixed! > > Otherwise seems okay. 
> v3 which is also rebased on latest 8234796: Inc: http://cr.openjdk.java.net/~rehn/8234742/v3/inc/webrev/index.html Full: http://cr.openjdk.java.net/~rehn/8234742/v3/full/webrev/index.html t1-3 Thanks, Robbin > Thanks, > David > ----- > >> Thanks, Robbin >> >> On 11/25/19 5:33 PM, Robbin Ehn wrote: >>> Hi all, please review. >>> >>> There is little useful information in the handshaking logs. >>> This changes the handshakes logs similar to safepoint logs, so the basic need of >>> what handshake operation and how long it took easily can be tracked. >>> Also the per thread log is a bit enhanced. >>> >>> The refactoring using HandshakeOperation instead of a ThreadClosure is not >>> merely for this change. Other changes in the pipeline also require a more >>> complex HandshakeOperation. >>> >>> Issue: >>> https://bugs.openjdk.java.net/browse/JDK-8234742 >>> Changeset: >>> http://cr.openjdk.java.net/~rehn/8234742/full/webrev/ >>> >>> Passes t1-3. >>> >>> Thanks, Robbin >>> >>> Examples: >>> -Xlog:handshake,safepoint >>> >>> [7.148s][info][safepoint] Safepoint "ZMarkStart", Time since last: 381873579 >>> ns, Reaching safepoint: 451132 ns, At safepoint: 491202 ns, Total: 942334 ns >>> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted >>> threads: 25, Executed by targeted threads: 8, Total completion time: 46884 ns >>> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted >>> threads: 25, Executed by targeted threads: 10, Total completion time: 94547 ns >>> [7.152s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted >>> threads: 25, Executed by targeted threads: 10, Total completion time: 33545 ns >>> [7.154s][info][safepoint] Safepoint "ZMarkEnd", Time since last: 4697901 ns, >>> Reaching safepoint: 218800 ns, At safepoint: 1462059 ns, Total: 1680859 ns >>> [7.156s][info][handshake] Handshake "ZRendezvous", Targeted threads: 25, >>> Executed by targeted threads: 10, Total completion time: 37291 ns >>> 
[7.157s][info][safepoint] Safepoint "ZVerify", Time since last: 2201206 ns, >>> Reaching safepoint: 295463 ns, At safepoint: 928077 ns, Total: 1223540 ns >>> [7.161s][info][safepoint] Safepoint "ZRelocateStart", Time since last: >>> 3161645 ns, Reaching safepoint: 206278 ns, At safepoint: 357284 ns, Total: >>> 563562 ns >>> [8.162s][info][safepoint] Safepoint "Cleanup", Time since last: 1000123769 >>> ns, Reaching safepoint: 526489 ns, At safepoint: 23345 ns, Total: 549834 ns >>> [8.182s][info][handshake] Handshake "RevokeOneBias", Targeted threads: 1, >>> Executed by targeted threads: 0, Total completion time: 41322 ns >>> >>> -Xlog:handshake*=trace >>> >>> [1.259s][trace][handshake ] Threads signaled, begin processing blocked >>> threads by VMThtread >>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f2594022800, is_vm_thread: true, completed in 487 ns >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f259459e000, is_vm_thread: false, completed in 1233 ns >>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f25945a0000, is_vm_thread: false, completed in 669 ns >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f259428a800, is_vm_thread: true, completed in 462 ns >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f25945b3800, is_vm_thread: false, completed in 574 ns >>> ... 
>>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f25945b6000, is_vm_thread: true, completed in 100 ns >>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f25945b7800, is_vm_thread: true, completed in 103 ns >>> [1.260s][info ][handshake ] Handshake "ZRendezvous", Targeted threads: 28, >>> Executed by targeted threads: 4, Total completion time: 629534 ns >>> [1.260s][debug][handshake,task ] Operation: ZRendezvous for thread >>> 0x00007f25945a3800, is_vm_thread: false, completed in 608 ns From matthias.baesken at sap.com Wed Nov 27 16:36:38 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Wed, 27 Nov 2019 16:36:38 +0000 Subject: building libjvm with -Os for space optimization - was : RE: RFR: 8234525: enable link-time section-gc for linux s390x to remove unused code Message-ID: Hello Martin, I checked building libjvm.so with -Os (instead of -O3) . I used gcc-7 on linux x86_64 . The size of libjvm.so dropped from 24M (normal night make with -O3) to 18M ( test make with -Os) . (adding the link-time gc might reduce the size by another ~ 10 % , but those 2 builds were without the ltgc ) Cannot say much so far about performance impact . Best regards, Matthias > > Hi Matthias and Erik, > > I also think this is an interesting option. > > I like the idea to generate smaller libraries. In addition to that, I could also > imagine building with -Os (size optimized) by default and only select -O3 for > performance critical files (e.g. C2's register allocation, some gc code, ...). > > If we want to go into such a direction for all linux platforms and want to use > this s390 only change as some kind of pipe cleaner, I think this change is fine > and can get pushed. 
> Otherwise, I think building s390 differently and not intending to do the same > for other linux platforms would be not so good. > > We should only make sure the exported symbols are set up properly to avoid > that this optimization throws out too much. > > My 50 Cents. > > Best regards, > Martin > From thomas.stuefe at sap.com Wed Nov 27 17:05:50 2019 From: thomas.stuefe at sap.com (Stuefe, Thomas) Date: Wed, 27 Nov 2019 17:05:50 +0000 Subject: RFC: JEP: Elastic Metaspace Message-ID: Hi all, As some of you know, I work on a prototype for a new Metaspace. Now I reached a point where the prototype is done, works well, is stable. Results are promising and I would like to get feedback on how best to proceed. The JEP is still in draft state ([1]). In my mind it is the spiritual counterpart to JEPs like JEP 346: "Promptly Return Unused Committed Memory from G1" or JEP 351: "ZGC: Uncommit Unused Memory". The new Metaspace is a wholesale replacement of the old one and has the following advantages: - It is way more elastic. In situations involving mass class unloading, we see a significant reduction in committed memory. For an extreme example, see [2] which demonstrates how the new implementation recovers from usage spikes compared to the old one. Here we see a reduction of about 70% of Metaspace after class unloading. - There are modest memory savings even without class unloading. With many applications, we see a reduction in Metaspace committed space of about 5-10 %. - (I believe) the new implementation is cleaner and long term cheaper to maintain. It does away with a lot of peculiarities of the old implementation - which had grown organically for a while now. Its sub parts are cleanly separated, and can be changed, tested and even replaced individually. If you'd like to take a look and give the prototype a spin, it lives in the jdk-sandbox repository, under the branch "stuefe-new-metaspace-branch" [3]. 
--- A quick run through what changed with the new Metaspace, what stayed the same: - We still use mmap(). We still have two spaces, the non-class-space and the class space. The same basic layout - a chained list of memory regions for the non-class metaspace, a contiguous region for the Compressed Class space. (We could of course question this setup. For example, we could get rid of the non-class-space region chain, and let everything live in a pre-reserved contiguous range - basically the former class space, but now containing all metadata. This would have some technical benefits at the cost of losing the potential of unlimited, "zero maintenance" growth. But in conversations with Oracle I found that this was not desired, and it is not really that important.) - So we reserve memory like we did before, but do not commit it with the typical HWM scheme; instead, the memory is divided into homogeneous sections of n pages, and each section ("commit granule") can be committed/uncommitted individually. - Atop of that model we still have chunks like we did before, but these chunks can be committed, uncommitted or partly committed. When memory is allocated from a chunk, the underlying commit granules are committed automatically. That makes it possible to hand large chunks to class loaders and still not pay the full price up front. - Chunk sizes follow a power-2-buddy-allocator [4] scheme: they are sized from very small (1K) up to large (4M) in power-2-steps. On allocation, larger chunks are split to produce the desired chunk size; on deallocation, chunks are fused with neighboring buddies to form larger chunks. We also do not have humongous chunks anymore since they are unnecessary. In principle, we have a form of weird, crooked buddy allocator even in the current Metaspace, since [5], when we introduced chunk coalescation with JDK 11. 
However, due to the odd chunk geometry the current allocator has, and due to things like humongous chunks, the current implementation is inefficient, costs more, and is way more complicated than necessary. The beauty about buddy style allocation is that it is dead simple and cheap to implement, and that everyone knows it - so this makes maintenance easier. - When the loader is collected, the chunks are released into freelists; they are fused in buddy-style-fashion, forming larger chunks. If chunk size surpasses a (tunable) threshold, memory below that chunk is uncommitted. Please see the JEP description [1] for more details. -- This is ongoing work, and not every improvement is listed in the JEP since it is supposed to be a high-level view. I am currently at the "tweaking" phase, tuning and building small additions to make the Metaspace allocator perform smarter in corner cases. One example would be the treatment of "Micro-ClassLoaderData" - CLDs which only load one class, e.g. Reflection delegator classes or hidden classes for lambdas. These CLDs will only ever allocate one InstanceKlass, and in these cases it is inefficient to use the full SpaceManager-Chunk-Machinery for the class space part. That can be done much simpler and would save about 10% of committed Metaspace in Lambda-heavy cases. -- But I am not sure how to proceed now. So I would love to get feedback on this. My plans are to get this into JDK 15 if possible. Long term, I also would love to backport this to older releases - since it is a pretty isolated piece of machinery with only loose ties to the rest of the VM, that should be possible without too many problems. I am also not sure if JEP is the right vehicle. I would not mind a JEP number for this, but my priority is to bring this in. If a JEP makes sense, I would be happy if someone were to sponsor this. 
Thank you, Thomas [1] https://bugs.openjdk.java.net/browse/JDK-8221173 [2] https://bugs.openjdk.java.net/secure/attachment/85771/test-results.pdf [3] http://hg.openjdk.java.net/jdk/sandbox/shortlog/54750b448264 [4] https://en.wikipedia.org/wiki/Buddy_memory_allocation [5] https://bugs.openjdk.java.net/browse/JDK-8198423 From claes.redestad at oracle.com Wed Nov 27 17:57:06 2019 From: claes.redestad at oracle.com (Claes Redestad) Date: Wed, 27 Nov 2019 18:57:06 +0100 Subject: building libjvm with -Os for space optimization - was : RE: RFR: 8234525: enable link-time section-gc for linux s390x to remove unused code In-Reply-To: References: Message-ID: <3bffe1cf-4567-0cf6-4bfb-ad79bd0b9596@oracle.com> Hi, we discussed doing the opposite for Mac OS X recently, where builds are currently set to -Os by default. -O3 helped various networking (micro)benchmarks by up to 20%. Rather than doing -Os by default and then cherry-pick things over to -O3 on a case-by-case basis, I'd suggest the opposite: keep -O3 as the default, start evaluating -Os on a case-by-case basis. This allows for an incremental approach where we identify things that are definitely not performance critical, e.g., never shows up in profiles, and switch those compilation units over to -Os. Check for harmful performance impact and expected footprint improvement; rinse; repeat. $.02 /Claes On 2019-11-27 17:36, Baesken, Matthias wrote: > Hello Martin, I checked building libjvm.so with -Os (instead of -O3) . > > I used gcc-7 on linux x86_64 . > The size of libjvm.so dropped from 24M (normal night make with -O3) to 18M ( test make with -Os) . > (adding the link-time gc might reduce the size by another ~ 10 % , but those 2 builds were without the ltgc ) > > Cannot say much so far about performance impact . > > Best regards, Matthias > > > >> >> Hi Matthias and Erik, >> >> I also think this is an interesting option. >> >> I like the idea to generate smaller libraries. 
In addition to that, I could also >> imagine building with -Os (size optimized) by default and only select -O3 for >> performance critical files (e.g. C2's register allocation, some gc code, ...). >> >> If we want to go into such a direction for all linux platforms and want to use >> this s390 only change as some kind of pipe cleaner, I think this change is fine >> and can get pushed. >> Otherwise, I think building s390 differently and not intending to do the same >> for other linux platforms would be not so good. >> >> We should only make sure the exported symbols are set up properly to avoid >> that this optimization throws out too much. >> >> My 50 Cents. >> >> Best regards, >> Martin >> > From martin.doerr at sap.com Wed Nov 27 18:03:59 2019 From: martin.doerr at sap.com (Doerr, Martin) Date: Wed, 27 Nov 2019 18:03:59 +0000 Subject: building libjvm with -Os for space optimization - was : RE: RFR: 8234525: enable link-time section-gc for linux s390x to remove unused code In-Reply-To: <3bffe1cf-4567-0cf6-4bfb-ad79bd0b9596@oracle.com> References: <3bffe1cf-4567-0cf6-4bfb-ad79bd0b9596@oracle.com> Message-ID: Hi Claes, that kind of surprises me. I'd expect files which rather benefit from -O3 to be far less than those which benefit from -Os. Most performance critical code lives inside the code cache and is not dependent on C++ compiler optimizations. I'd expect GC code, C2's register allocation and a few runtime files to be the most performance critical C++ code. So the list of files for -Os may become long. Yeah, I think we should use native profiling information to find out what's really going on. Your idea to change file by file and check for performance regression makes sense to me, though. Best regards, Martin > -----Original Message----- > From: Claes Redestad > Sent: Mittwoch, 27. 
November 2019 18:57 > To: Baesken, Matthias ; Doerr, Martin > ; Erik Joelsson ; 'build- > dev at openjdk.java.net' ; 'hotspot- > dev at openjdk.java.net' > Subject: Re: building libjvm with -Os for space optimization - was : RE: RFR: > 8234525: enable link-time section-gc for linux s390x to remove unused code > > Hi, > > we discussed doing the opposite for Mac OS X recently, where builds are > currently set to -Os by default. -O3 helped various networking > (micro)benchmarks by up to 20%. > > Rather than doing -Os by default and then cherry-pick things over to -O3 > on a case-by-case basis, I'd suggest the opposite: keep -O3 as the > default, start evaluating -Os on a case-by-case basis. This allows for > an incremental approach where we identify things that are definitely not > performance critical, e.g., never shows up in profiles, and switch those > compilation units over to -Os. Check for harmful performance impact and > expected footprint improvement; rinse; repeat. > > $.02 > > /Claes > > > On 2019-11-27 17:36, Baesken, Matthias wrote: > > Hello Martin, I checked building libjvm.so with -Os (instead of -O3) . > > > > I used gcc-7 on linux x86_64 . > > The size of libjvm.so dropped from 24M (normal night make with -O3) > to 18M ( test make with -Os) . > > (adding the link-time gc might reduce the size by another ~ 10 % , but > those 2 builds were without the ltgc ) > > > > Cannot say much so far about performance impact . > > > > Best regards, Matthias > > > > > > > >> > >> Hi Matthias and Erik, > >> > >> I also think this is an interesting option. > >> > >> I like the idea to generate smaller libraries. In addition to that, I could also > >> imagine building with -Os (size optimized) by default and only select -O3 > for > >> performance critical files (e.g. C2's register allocation, some gc code, ...). 
> >> > >> If we want to go into such a direction for all linux platforms and want to > use > >> this s390 only change as some kind of pipe cleaner, I think this change is > fine > >> and can get pushed. > >> Otherwise, I think building s390 differently and not intending to do the > same > >> for other linux platforms would be not so good. > >> > >> We should only make sure the exported symbols are set up properly to > avoid > >> that this optimization throws out too much. > >> > >> My 50 Cents. > >> > >> Best regards, > >> Martin > >> > > From igor.ignatyev at oracle.com Wed Nov 27 18:15:27 2019 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Wed, 27 Nov 2019 18:15:27 +0000 (UTC) Subject: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 In-Reply-To: References: <5FEB52F4-2AA3-4ECD-A05E-5A2504F49984@oracle.com> Message-ID: <04CB4CC0-F5A5-4FBC-9044-9A746204C928@oracle.com> SkippedException's message is used by jtreg as a reason why a test was skipped, so I'd suggest to change it to something like '/cores is not writable'. I'm also not sure if we really need to check version string at L#123 now, but it's fine if you decide to keep it. Thanks. -- Igor > On Nov 27, 2019, at 6:30 AM, Baesken, Matthias wrote: > > ? I'd recommend you to use jtreg.SkippedException to signal that the test can't be run or to introduce new @requires property > > > Hello, I now changed the test to throw a SkippedException : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.1/ > > > Best regards, Matthias > > > From: Igor Ignatyev > Sent: Dienstag, 26. 
November 2019 18:35 > To: Baesken, Matthias > Cc: hotspot-dev at openjdk.java.net > Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 > > for now, we have only test/jtreg-ext/requires/VMProps.java which sets @requires properties; although 'env.core.accessible' and similar properties aren't vm-related properties per-se so VMProps isn't the best place to set them. however VMProps already sets a few non vm properties, and given introduction of another class to set these properties is a bit of hassle, I think it's fine to just add `env.core.accessible` to VMProps (and later rename VMProps to be more appropriate for all kinds of properties). > > there is one thing which you should be aware than add any new requires properties, jtreg runs VMProps once for *every* execution even if none of "target" tests use @requires, in other words, all test executions pay price of setting all properties, therefore it should be costly. > > -- Igor > > > On Nov 26, 2019, at 2:07 AM, Baesken, Matthias > wrote: > > > or to introduce new @requires property, say `env.core.accessible`, which is true iif core dumping is enabled and dumped cores can be accessed from the test code. > > Hello Igor, that sounds interesting. > > Who would set the say env.core.accessible property accordingly (so that in our environment on 10.15 it would be false) ? > > Best regards, Matthias > > > From: Igor Ignatyev > > Sent: Montag, 25. November 2019 21:10 > To: Baesken, Matthias > > Cc: hotspot-dev at openjdk.java.net > Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 > > Hi Matthias, > > your solution will hide the fact that the coverage from this test will be missed on macos 10.15+. 
I'd recommend you to use jtreg.SkippedException to signal that the test can't be run or to introduce new @requires property, say `env.core.accessible`, which is true iff core dumping is enabled and dumped cores can be accessed from the test code. > > -- Igor > > > > On Nov 25, 2019, at 1:58 AM, Baesken, Matthias > wrote: > > Hello, the test > serviceability/sa/ClhsdbCDSCore.java > fails on macOS 10.15 . > exception : > java.lang.Error: cores is not a directory or does not have write permissions > at ClhsdbCDSCore.main(ClhsdbCDSCore.java:115) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:564) > at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127) > at java.base/java.lang.Thread.run(Thread.java:833) > > Looks like the test checks that directory /cores is writable : > File coresDir = new File("/cores"); > if (!coresDir.isDirectory() || !coresDir.canWrite()) { ... // fail > However on macOS 10.15 /cores is not writable any more (at least for most users, including our test user). > So the test fails. > > My change adjusts the test, so that it gives a clearer error message, and returns gracefully in case we are running on macOS 10.15 and notice a non-writeable /cores directory . 
> > > Bug/webrev : > > https://bugs.openjdk.java.net/browse/JDK-8234625 > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.0/ > > > Best regards, Matthias From kim.barrett at oracle.com Wed Nov 27 21:49:26 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 27 Nov 2019 16:49:26 -0500 Subject: RFR: 8234779: Provide idiom for declaring classes noncopyable In-Reply-To: <233a779d-ae3b-4856-9c44-dd81bfceab6e@oracle.com> References: <233a779d-ae3b-4856-9c44-dd81bfceab6e@oracle.com> Message-ID: <129579A4-01C4-4F62-9582-06BE19CB13C0@oracle.com> > On Nov 27, 2019, at 3:43 AM, Per Liden wrote: > > Please don't add this :( I don't think this adds any value, it add another ugly macros I know I will never want to use. I'd much prefer to read real C++ instead of some macro that hides what's going on. Not being explicit about copy functions or noncopyability (e.g. not following the Rule of 3) can and has resulted in bugs. C++ will silently create the used functions with default definitions that aren't at all what one wants in some cases. The Rule of 3 makes it easier to read and understand code, because certain classes of easily overlooked errors are prevented by the compiler and simply cannot happen by design. That's why it's a "rule" in the wider community, even though not so much in HotSpot code, to our detriment in my opinion. The C++03 idiom of private declared but not defined copy ctor and assignment operator is, so far as I know, the best mechanism available for making a class noncopyable. All other approaches I know of have unpleasant side effects. That idiom is rather wordy and indirect though. In particular, it is generally accompanied by comments indicating that this is to make the class noncopyable, or that the declared functions are not defined (not always with a reason, so that needs to be inferred). 
Failure to provide such comments means the reader may need to check for a definition in order to determine whether that idiom is being used, or whether the definitions are just not inline. The proposed macro significantly reduces that wordiness. Far more importantly, it makes the intent entirely self-evident; there's no need for any explanatory comments. The C++11 idiom is slightly different, in that deleted definitions should be used rather than leaving the operations undefined. That's easily accommodated with this macro; a couple of small changes to the macro and all uses are done. (There is a benefit to making the deleted definitions public with C++11, probably getting a better error message, but that's chrome and can be improved lazily as code gets touched.) Much of software is about hiding details, e.g. abstraction. (It would be nice if C++ had a better mechanism for syntactic abstraction.) So I disagree with the given rationale for objecting to this proposal. From claes.redestad at oracle.com Wed Nov 27 22:35:15 2019 From: claes.redestad at oracle.com (Claes Redestad) Date: Wed, 27 Nov 2019 23:35:15 +0100 Subject: building libjvm with -Os for space optimization - was : RE: RFR: 8234525: enable link-time section-gc for linux s390x to remove unused code In-Reply-To: References: <3bffe1cf-4567-0cf6-4bfb-ad79bd0b9596@oracle.com> Message-ID: <84420dee-9889-b320-253b-f00551a2c9da@oracle.com> Hi Martin, On 2019-11-27 19:03, Doerr, Martin wrote: > Hi Claes, > > that kind of surprises me. I'd expect files which rather benefit from -O3 to be far less than those which benefit from -Os. > Most performance critical code lives inside the code cache and is not dependent on C++ compiler optimizations. > I'd expect GC code, C2's register allocation and a few runtime files to be the most performance critical C++ code. > So the list of files for -Os may become long. 
that might very well be the end result, and once/if we've gone down this path long enough to see that -O3 becomes the exception, we can re-examine the default. Changing the default and then trying to recuperate would be hard/impossible to do incrementally. > > Yeah, I think we should use native profiling information to find out what's really going on > > Your idea to change file by file and check for performance regression makes sense to me, though Hopefully we don't have to do one RFE per file.. :-) /Claes From kim.barrett at oracle.com Thu Nov 28 03:34:34 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Wed, 27 Nov 2019 22:34:34 -0500 Subject: RFR: 8213415: BitMap::word_index_round_up overflow problems In-Reply-To: References: <24C2BDC9-AC35-4804-95C8-1B59747C1494@oracle.com> <5bd19480-d570-b73d-70fe-10e93ef2ecb8@oracle.com> <3F4F27AC-4812-4E55-99DF-25F0A7BD991D@oracle.com> <2FB27F12-B779-419E-A501-715248CF2309@oracle.com> <1283c20e-9cd8-8031-554b-e23b9f5ae7e0@oracle.com> Message-ID: > On Nov 27, 2019, at 6:11 AM, Thomas Schatzl wrote: > > Hi again, > > one thing I forgot: there is a merge error with StefanK's latest changes about includes for atomic/orderAccess.hpp in bitMap.inline.hpp. I do not need a re-review for that. Oops, you are right, the include of orderAccess.hpp should be removed from my changes. Thanks for catching that. From david.holmes at oracle.com Thu Nov 28 05:31:23 2019 From: david.holmes at oracle.com (David Holmes) Date: Thu, 28 Nov 2019 15:31:23 +1000 Subject: RFR: 8234742: Improve handshake logging In-Reply-To: References: <31e700c7-d2fe-2977-e1e2-13e3cc10927d@oracle.com> Message-ID: <31e885e2-1611-d40b-39c4-04c7cb74118b@oracle.com> Hi Robbin, Updates all seem fine. Thanks, David On 28/11/2019 1:51 am, Robbin Ehn wrote: > Hi David, thanks for having a look! 
> > On 11/27/19 4:25 AM, David Holmes wrote: >> Hi Robbin, >> >> On 26/11/2019 11:06 pm, Robbin Ehn wrote: >>> Hi, >>> >>> Here is the logging part separately: >>> http://cr.openjdk.java.net/~rehn/8234742/v2/full/webrev/index.html >>> >>> It contains one additional change from the first version: >>> ????? if (number_of_threads_issued < 1) { >>> -????? log_debug(handshake)("No threads to handshake."); >>> +????? log_handshake_info(start_time_ns, _op->name(), 0, 0, " (no >>> threads)"); >>> ??????? return; >>> ????? } >>> >>> Passes t1-3. >>> >>> So this goes on top of 8234796: >>> http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ >> >> src/hotspot/share/runtime/handshake.hpp >> >> I was surprised that "process_by_vmthread" doesn't really mean that. >> Perhaps rename to try_process_by_vmThread? > > I did, but also renamed: > bool handshake_process_by_vmthread() in thread.hpp > >> >> --- >> >> src/hotspot/share/runtime/handshake.cpp >> >> 91???? log_info(handshake)("Handshake \"%s\", Targeted threads: %d, >> Executed by targeted threads: %d, Total completion time: " >> JLONG_FORMAT " ns%s", >> >> Probably better to end with " ns, %s" so you don't have to remember to >> start the 'extra' string with a space each time. > > Fixed, but I didn't like the ending "," when the string was empty, so I > did a > different fix! > >> >> 168???? log_trace(handshake)("Threads signaled, begin processing >> blocked threads by VMThtread") >> >> Existing typo: VMThtread > > Fixed! > >> >> Otherwise seems okay. >> > > v3 which is also rebased on latest 8234796: > Inc: > http://cr.openjdk.java.net/~rehn/8234742/v3/inc/webrev/index.html > Full: > http://cr.openjdk.java.net/~rehn/8234742/v3/full/webrev/index.html > > t1-3 > > Thanks, Robbin > >> Thanks, >> David >> ----- >> >>> Thanks, Robbin >>> >>> On 11/25/19 5:33 PM, Robbin Ehn wrote: >>>> Hi all, please review. >>>> >>>> There is little useful information in the handshaking logs. 
>>>> This changes the handshakes logs similar to safepoint logs, so the >>>> basic need of >>>> what handshake operation and how long it took easily can be tracked. >>>> Also the per thread log is a bit enhanced. >>>> >>>> The refactoring using HandshakeOperation instead of a ThreadClosure >>>> is not >>>> merely for this change. Other changes in the pipeline also require a >>>> more >>>> complex HandshakeOperation. >>>> >>>> Issue: >>>> https://bugs.openjdk.java.net/browse/JDK-8234742 >>>> Changeset: >>>> http://cr.openjdk.java.net/~rehn/8234742/full/webrev/ >>>> >>>> Passes t1-3. >>>> >>>> Thanks, Robbin >>>> >>>> Examples: >>>> -Xlog:handshake,safepoint >>>> >>>> [7.148s][info][safepoint] Safepoint "ZMarkStart", Time since last: >>>> 381873579 ns, Reaching safepoint: 451132 ns, At safepoint: 491202 >>>> ns, Total: 942334 ns >>>> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", >>>> Targeted threads: 25, Executed by targeted threads: 8, Total >>>> completion time: 46884 ns >>>> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", >>>> Targeted threads: 25, Executed by targeted threads: 10, Total >>>> completion time: 94547 ns >>>> [7.152s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", >>>> Targeted threads: 25, Executed by targeted threads: 10, Total >>>> completion time: 33545 ns >>>> [7.154s][info][safepoint] Safepoint "ZMarkEnd", Time since last: >>>> 4697901 ns, Reaching safepoint: 218800 ns, At safepoint: 1462059 ns, >>>> Total: 1680859 ns >>>> [7.156s][info][handshake] Handshake "ZRendezvous", Targeted threads: >>>> 25, Executed by targeted threads: 10, Total completion time: 37291 ns >>>> [7.157s][info][safepoint] Safepoint "ZVerify", Time since last: >>>> 2201206 ns, Reaching safepoint: 295463 ns, At safepoint: 928077 ns, >>>> Total: 1223540 ns >>>> [7.161s][info][safepoint] Safepoint "ZRelocateStart", Time since >>>> last: 3161645 ns, Reaching safepoint: 206278 ns, At safepoint: >>>> 357284 ns, Total: 563562 ns >>>> 
[8.162s][info][safepoint] Safepoint "Cleanup", Time since last: >>>> 1000123769 ns, Reaching safepoint: 526489 ns, At safepoint: 23345 >>>> ns, Total: 549834 ns >>>> [8.182s][info][handshake] Handshake "RevokeOneBias", Targeted >>>> threads: 1, Executed by targeted threads: 0, Total completion time: >>>> 41322 ns >>>> >>>> -Xlog:handshake*=trace >>>> >>>> [1.259s][trace][handshake ] Threads signaled, begin processing >>>> blocked threads by VMThtread >>>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>> 0x00007f2594022800, is_vm_thread: true, completed in 487 ns >>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>> 0x00007f259459e000, is_vm_thread: false, completed in 1233 ns >>>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>> 0x00007f25945a0000, is_vm_thread: false, completed in 669 ns >>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>> 0x00007f259428a800, is_vm_thread: true, completed in 462 ns >>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>> 0x00007f25945b3800, is_vm_thread: false, completed in 574 ns >>>> ... 
>>>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>> 0x00007f25945b6000, is_vm_thread: true, completed in 100 ns >>>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>> 0x00007f25945b7800, is_vm_thread: true, completed in 103 ns >>>> [1.260s][info ][handshake ] Handshake "ZRendezvous", Targeted >>>> threads: 28, Executed by targeted threads: 4, Total completion time: >>>> 629534 ns >>>> [1.260s][debug][handshake,task ] Operation: ZRendezvous for thread >>>> 0x00007f25945a3800, is_vm_thread: false, completed in 608 ns From david.holmes at oracle.com Thu Nov 28 06:21:52 2019 From: david.holmes at oracle.com (David Holmes) Date: Thu, 28 Nov 2019 16:21:52 +1000 Subject: RFR: 8234796: Refactor Handshake::execute to take a HandshakeOperation In-Reply-To: <1ff4ff37-c65e-43e7-a845-bc6ce06750f0@oracle.com> References: <1ff4ff37-c65e-43e7-a845-bc6ce06750f0@oracle.com> Message-ID: <9dfd8c82-a7b2-fe4d-4631-31d0d5b29ff0@oracle.com> Hi Robbin, On 28/11/2019 1:25 am, Robbin Ehn wrote: > Hi all, please review. > > Here is the result after Per's suggestion: > http://cr.openjdk.java.net/~rehn/8234796/v2/full/webrev/index.html > (incremental made no sense) > > Due to circular dependency between thread.hpp and handshake.hpp, I moved > the ThreadClosure to iterator.hpp, as was suggested offline. That all looks good to me! Thanks for splitting these up. Thanks, David ----- > Passes t1-3 > > Thanks, Robbin > > On 11/26/19 2:07 PM, Robbin Ehn wrote: >> Hi all, please review. >> >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8234796 >> Code: >> http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ >> >> The handshake code needs more information about the handshake operation. >> We change type from ThreadClosure to HandshakeOperation in >> Handshake::execute. 
>> This enables us to add more details to the HandshakeOperation as >> needed going forward. >> >> Tested t1 and t1-3 together with the logging improvements in 8234742. >> >> It was requested that "HandshakeOperation()" would take the name >> instead having "virtual const char* name();". Which is in this patch. >> >> Thanks, Robbin From per.liden at oracle.com Thu Nov 28 07:23:26 2019 From: per.liden at oracle.com (Per Liden) Date: Thu, 28 Nov 2019 08:23:26 +0100 Subject: RFR: 8234796: Refactor Handshake::execute to take a HandshakeOperation In-Reply-To: <1ff4ff37-c65e-43e7-a845-bc6ce06750f0@oracle.com> References: <1ff4ff37-c65e-43e7-a845-bc6ce06750f0@oracle.com> Message-ID: <4b7d0895-f297-1c71-753c-55c42456078e@oracle.com> On 11/27/19 4:25 PM, Robbin Ehn wrote: > Hi all, please review. > > Here is the result after Per's suggestion: > http://cr.openjdk.java.net/~rehn/8234796/v2/full/webrev/index.html > (incremental made no sense) > > Due to circular dependency between thread.hpp and handshake.hpp, I moved > the > ThreadClosure to iterator.hpp, as was suggested offline. Thanks for making that change, Robbin! Looks good to me. cheers, /Per > > Passes t1-3 > > Thanks, Robbin > > On 11/26/19 2:07 PM, Robbin Ehn wrote: >> Hi all, please review. >> >> Issue: >> https://bugs.openjdk.java.net/browse/JDK-8234796 >> Code: >> http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ >> >> The handshake code needs more information about the handshake operation. >> We change type from ThreadClosure to HandshakeOperation in >> Handshake::execute. >> This enables us to add more details to the HandshakeOperation as >> needed going forward. >> >> Tested t1 and t1-3 together with the logging improvements in 8234742. >> >> It was requested that "HandshakeOperation()" would take the name >> instead having "virtual const char* name();". Which is in this patch. 
>> >> Thanks, Robbin From matthias.baesken at sap.com Thu Nov 28 08:33:17 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Thu, 28 Nov 2019 08:33:17 +0000 Subject: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 In-Reply-To: <04CB4CC0-F5A5-4FBC-9044-9A746204C928@oracle.com> References: <5FEB52F4-2AA3-4ECD-A05E-5A2504F49984@oracle.com> <04CB4CC0-F5A5-4FBC-9044-9A746204C928@oracle.com> Message-ID: * so I'd suggest to change it to something like '/cores is not writable'. Hi Igor, done : http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.2/ May I add you as reviewer now ? Best regards, Matthias From: Igor Ignatyev Sent: Mittwoch, 27. November 2019 19:15 To: Baesken, Matthias Cc: hotspot-dev at openjdk.java.net; Langer, Christoph Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 SkippedException's message is used by jtreg as a reason why a test was skipped, so I'd suggest to change it to something like '/cores is not writable'. I'm also not sure if we really need to check version string at L#123 now, but it's fine if you decide to keep it. Thanks. -- Igor On Nov 27, 2019, at 6:30 AM, Baesken, Matthias > wrote: > I'd recommend you to use jtreg.SkippedException to signal that the test can't be run or to introduce new @requires property Hello, I now changed the test to throw a SkippedException : http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.1/ Best regards, Matthias From: Igor Ignatyev > Sent: Dienstag, 26. November 2019 18:35 To: Baesken, Matthias > Cc: hotspot-dev at openjdk.java.net Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 for now, we have only test/jtreg-ext/requires/VMProps.java which sets @requires properties; although 'env.core.accessible' and similar properties aren't vm-related properties per-se so VMProps isn't the best place to set them. 
however VMProps already sets a few non vm properties, and given that introducing another class to set these properties is a bit of a hassle, I think it's fine to just add `env.core.accessible` to VMProps (and later rename VMProps to be more appropriate for all kinds of properties). there is one thing you should be aware of when adding any new requires properties: jtreg runs VMProps once for *every* execution even if none of "target" tests use @requires, in other words, all test executions pay the price of setting all properties, therefore setting them should not be costly. -- Igor On Nov 26, 2019, at 2:07 AM, Baesken, Matthias > wrote: * or to introduce new @requires property, say `env.core.accessible`, which is true iff core dumping is enabled and dumped cores can be accessed from the test code. Hello Igor, that sounds interesting. Who would set, say, the env.core.accessible property accordingly (so that in our environment on 10.15 it would be false) ? Best regards, Matthias From: Igor Ignatyev > Sent: Montag, 25. November 2019 21:10 To: Baesken, Matthias > Cc: hotspot-dev at openjdk.java.net Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 Hi Matthias, your solution will hide the fact that the coverage from this test will be missed on macos 10.15+. I'd recommend you to use jtreg.SkippedException to signal that the test can't be run or to introduce new @requires property, say `env.core.accessible`, which is true iff core dumping is enabled and dumped cores can be accessed from the test code. -- Igor On Nov 25, 2019, at 1:58 AM, Baesken, Matthias > wrote: Hello, the test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 .
exception : java.lang.Error: cores is not a directory or does not have write permissions at ClhsdbCDSCore.main(ClhsdbCDSCore.java:115) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127) at java.base/java.lang.Thread.run(Thread.java:833) Looks like the test checks that directory /cores is writable : File coresDir = new File("/cores"); if (!coresDir.isDirectory() || !coresDir.canWrite()) { ... // fail However on macOS 10.15 /cores is not writable any more (at least for most users, including our test user). So the test fails. My change adjusts the test, so that it gives a clearer error message, and returns gracefully in case we are running on macOS 10.15 and notice a non-writeable /cores directory . Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8234625 http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.0/ Best regards, Matthias From per.liden at oracle.com Thu Nov 28 08:50:11 2019 From: per.liden at oracle.com (Per Liden) Date: Thu, 28 Nov 2019 09:50:11 +0100 Subject: RFR: 8234779: Provide idiom for declaring classes noncopyable In-Reply-To: <129579A4-01C4-4F62-9582-06BE19CB13C0@oracle.com> References: <233a779d-ae3b-4856-9c44-dd81bfceab6e@oracle.com> <129579A4-01C4-4F62-9582-06BE19CB13C0@oracle.com> Message-ID: Hi Kim, On 11/27/19 10:49 PM, Kim Barrett wrote: >> On Nov 27, 2019, at 3:43 AM, Per Liden wrote: >> >> Please don't add this :( I don't think this adds any value, it adds another ugly macro I know I will never want to use. I'd much prefer to read real C++ instead of some macro that hides what's going on. > > Not being explicit about copy functions or noncopyability (e.g.
not > following the Rule of 3) can and has resulted in bugs. C++ will > silently create the used functions with default definitions that > aren't at all what one wants in some cases. > > The Rule of 3 makes it easier to read and understand code, because > certain classes of easily overlooked errors are prevented by the > compiler and simply cannot happen by design. That's why it's a "rule" > in the wider community, even though not so much in HotSpot code, to > our detriment in my opinion. > > The C++03 idiom of private declared but not defined copy ctor and > assignment operator is, so far as I know, the best mechanism available > for making a class noncopyable. All other approaches I know of have > unpleasant side effects. No objections here. > > That idiom is rather wordy and indirect though. In particular, it > is generally accompanied by comments indicating that this is to make > the class noncopyable, or that the declared functions are not defined > (not always with a reason, so that needs to be inferred). Failure to > provide such comments means the reader may need to check for a > definition in order to determine whether that idiom is being used, or > whether the definitions are just not inline. > > The proposed macro significantly reduces that wordiness. Far more > importantly, it makes the intent entirely self-evident; there's no > need for any explanatory comments. My objection is that you are effectively moving us _away_ from a well known C++ idiom, since people tend to read code before it goes through the pre-processor. Once we have C++11 support we can easily switch over to using "= delete", and anything that was previously ambiguous or needed a comment will become clear, and our code would stay idiomatic. > > The C++11 idiom is slightly different, in that deleted definitions > should be used rather than leaving the operations undefined. That's > easily accommodated with this macro; a couple of small changes to the > macro and all uses are done.
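[For readers following along: the C++03 idiom Kim describes — declaring, but never defining, a private copy constructor and assignment operator — can be sketched as below. The macro name follows the RFR's title; treat the exact spelling and placement as illustrative, not as the final HotSpot definition.]

```cpp
#include <cassert>

// C++03 idiom: declare the copy operations private and leave them
// undefined. Any attempted copy fails at compile time (or at link time
// if made from a member or friend). Under C++11 the macro body would
// instead append "= delete" to the declarations.
#define NONCOPYABLE(C) C(const C&); C& operator=(const C&) /* next token must be ; */

class Mutex {
  int _lock_count;
  NONCOPYABLE(Mutex);  // intent is self-evident, no comment needed
 public:
  Mutex() : _lock_count(0) {}
  void lock() { ++_lock_count; }
  int count() const { return _lock_count; }
};

// Mutex m2(m1);   // would not compile: copy ctor is private and undefined
```

The macro replaces the two hand-written declarations plus the explanatory comment that usually accompanies them, which is the wordiness reduction being argued about.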
(There is a benefit to making the > deleted definitions public with C++11, probably getting a better error > message, but that's chrome and can be improved lazily as code gets > touched.) Adding "= delete" is good, but it will be no more work than it was to create this patch, so that can't be an argument in favor of this patch. cheers, /Per From christoph.langer at sap.com Thu Nov 28 08:55:29 2019 From: christoph.langer at sap.com (Langer, Christoph) Date: Thu, 28 Nov 2019 08:55:29 +0000 Subject: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 In-Reply-To: References: <5FEB52F4-2AA3-4ECD-A05E-5A2504F49984@oracle.com> <04CB4CC0-F5A5-4FBC-9044-9A746204C928@oracle.com> Message-ID: Hi Matthias, First of all, thanks for tackling this issue which pops up with MacOS 10.15. In the test, however, I think we should not/don't need to query for the explicit MacOS version. Just do the writability check and if it fails, throw the SkippedException. As for the message text, I'd prefer: "Directory \"" + coresDir + "\" is not writable". I think that's both explicit and concise. @Igor, what do you think? And there's one last thing I was thinking about: Is it really necessary that /cores is writable for the user? Maybe the system can still write a core into /cores? I'm trying to verify this now on my Catalina MacBook; will let you know. Cheers Christoph From: Baesken, Matthias Sent: Donnerstag, 28. November 2019 09:33 To: Igor Ignatyev Cc: hotspot-dev at openjdk.java.net; Langer, Christoph Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 * so I'd suggest to change it to something like '/cores is not writable'. Hi Igor, done : http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.2/ May I add you as reviewer now ? Best regards, Matthias From: Igor Ignatyev > Sent: Mittwoch, 27.
November 2019 19:15 To: Baesken, Matthias > Cc: hotspot-dev at openjdk.java.net; Langer, Christoph > Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 SkippedException's message is used by jtreg as a reason why a test was skipped, so I'd suggest to change it to something like '/cores is not writable'. I'm also not sure if we really need to check version string at L#123 now, but it's fine if you decide to keep it. Thanks. -- Igor On Nov 27, 2019, at 6:30 AM, Baesken, Matthias > wrote: > I'd recommend you to use jtreg.SkippedException to signal that the test can't be run or to introduce new @requires property Hello, I now changed the test to throw a SkippedException : http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.1/ Best regards, Matthias From: Igor Ignatyev > Sent: Dienstag, 26. November 2019 18:35 To: Baesken, Matthias > Cc: hotspot-dev at openjdk.java.net Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 for now, we have only test/jtreg-ext/requires/VMProps.java which sets @requires properties; although 'env.core.accessible' and similar properties aren't vm-related properties per-se so VMProps isn't the best place to set them. however VMProps already sets a few non vm properties, and given that introducing another class to set these properties is a bit of a hassle, I think it's fine to just add `env.core.accessible` to VMProps (and later rename VMProps to be more appropriate for all kinds of properties). there is one thing you should be aware of when adding any new requires properties: jtreg runs VMProps once for *every* execution even if none of "target" tests use @requires, in other words, all test executions pay the price of setting all properties, therefore setting them should not be costly.
-- Igor On Nov 26, 2019, at 2:07 AM, Baesken, Matthias > wrote: * or to introduce new @requires property, say `env.core.accessible`, which is true iff core dumping is enabled and dumped cores can be accessed from the test code. Hello Igor, that sounds interesting. Who would set, say, the env.core.accessible property accordingly (so that in our environment on 10.15 it would be false) ? Best regards, Matthias From: Igor Ignatyev > Sent: Montag, 25. November 2019 21:10 To: Baesken, Matthias > Cc: hotspot-dev at openjdk.java.net Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 Hi Matthias, your solution will hide the fact that the coverage from this test will be missed on macos 10.15+. I'd recommend you to use jtreg.SkippedException to signal that the test can't be run or to introduce new @requires property, say `env.core.accessible`, which is true iff core dumping is enabled and dumped cores can be accessed from the test code. -- Igor On Nov 25, 2019, at 1:58 AM, Baesken, Matthias > wrote: Hello, the test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 . exception : java.lang.Error: cores is not a directory or does not have write permissions at ClhsdbCDSCore.main(ClhsdbCDSCore.java:115) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127) at java.base/java.lang.Thread.run(Thread.java:833) Looks like the test checks that directory /cores is writable : File coresDir = new File("/cores"); if (!coresDir.isDirectory() || !coresDir.canWrite()) { ...
// fail However on macOS 10.15 /cores is not writable any more (at least for most users, including our test user). So the test fails. My change adjusts the test, so that it gives a clearer error message, and returns gracefully in case we are running on macOS 10.15 and notice a non-writeable /cores directory . Bug/webrev : https://bugs.openjdk.java.net/browse/JDK-8234625 http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.0/ Best regards, Matthias From matthias.baesken at sap.com Thu Nov 28 09:02:01 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Thu, 28 Nov 2019 09:02:01 +0000 Subject: 8234397: add OS uptime information to os::print_os_info output In-Reply-To: <30E06543-E081-429B-8293-8CA81D1F6870@sap.com> References: <30E06543-E081-429B-8293-8CA81D1F6870@sap.com> Message-ID: Hi Lutz, I adjusted the days calculation, new webrev : http://cr.openjdk.java.net/~mbaesken/webrevs/8234397.2/ Best regards, Matthias > Matthias, > > your change looks good to me overall. Please note: I'm not a Reviewer! > > I feel the urge to complain about one thing, though: > When calculating the uptime in days, you divide the time retrieved from the > system (usually seconds or milliseconds) by a large number. Why do you > force that number to be a float? I would prefer the denominator to be an > "int" value. > > Rationale: floats (32bits) are very limited in precision, only 6 to 7 decimal > digits. At least in the windows case, where you obtain milliseconds from the > system, your denominator is 86,400,000. At first glance, that does not fit into > a float mantissa. What saves you here are the prime factors "2" (10 in total). > As a result, you only need 17 mantissa bits to represent the denominator.
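[For readers following the float-precision discussion: an integer-only variant of the uptime calculation, which also yields the "days h:mm" style output suggested later in the thread, could look roughly like this. The function name, the std::string return, and the exact output format are made up for illustration; HotSpot code would print via an outputStream* instead.]

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <string>

// Sketch: derive days/hours/minutes from an uptime in milliseconds using
// only integer division, so no float mantissa precision is involved.
std::string format_uptime(uint64_t uptime_ms) {
  const uint64_t ms_per_minute = 60 * 1000;
  const uint64_t ms_per_hour   = 60 * ms_per_minute;
  const uint64_t ms_per_day    = 24 * ms_per_hour;
  uint64_t days    = uptime_ms / ms_per_day;
  uint64_t hours   = (uptime_ms % ms_per_day) / ms_per_hour;
  uint64_t minutes = (uptime_ms % ms_per_hour) / ms_per_minute;
  char buf[64];
  std::snprintf(buf, sizeof(buf), "OS uptime: %llu days %llu:%02llu",
                (unsigned long long)days, (unsigned long long)hours,
                (unsigned long long)minutes);
  return std::string(buf);
}
```

Since everything stays in 64-bit integers, the 86,400,000 denominator is represented exactly and no rounding surprises can occur.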
> > Thanks, > Lutz > > On 25.11.19, 09:06, "hotspot-dev on behalf of Baesken, Matthias" dev-bounces at openjdk.java.net on behalf of matthias.baesken at sap.com> > wrote: > > > > > The comment in the posix code mentions that it doesn't work on macOS > but > > doesn't say anything about Linux. Has it been tested on Solaris? > > > > Hi David, it works on Solaris . > I think I should adjust the comment (saying macOS AND Linux) . > > Best regards, Matthias > > > > > > > > One example that occurred last week - my colleague Christoph and me > > were browsing through an hs_err file of a crash on AIX . > > > When looking into the hs_err we wanted to know the uptime because > > our latest fontconfig - patches (for getting rid of the crash) needed a > > reboot too to really work . > > > Unfortunately we could not find the info , and we were disappointed ( > > then we noticed the crash is from OpenJDK and not our internal JVM ). > > > > > > > > >>> Bug/webrev : > > >>> https://bugs.openjdk.java.net/browse/JDK-8234397 > > >>> http://cr.openjdk.java.net/~mbaesken/webrevs/8234397.1/ > > >> > > >> Can Linux not use the POSIX version? > > >> > > > > > > Unfortunately the posix code does not give the desired result on Linux (at > > least on my test machines). > > > > The comment in the posix code mentions that it doesn't work on macOS > but > > doesn't say anything about Linux. Has it been tested on Solaris? > > > > I'm really unsure about this code and am hoping someone more > > knowledgeable in this area can chime in. I'd be less concerned if there > > was a single POSIX implementation that worked everywhere. :( Though I > > have my general concern about adding yet another potential point of > > failure in the error reporting logic.
> > > > Thanks, > > David > > > > > Best regards, Matthias > > > > From martin.doerr at sap.com Thu Nov 28 09:24:18 2019 From: martin.doerr at sap.com (Doerr, Martin) Date: Thu, 28 Nov 2019 09:24:18 +0000 Subject: building libjvm with -Os for space optimization - was : RE: RFR: 8234525: enable link-time section-gc for linux s390x to remove unused code In-Reply-To: <84420dee-9889-b320-253b-f00551a2c9da@oracle.com> References: <3bffe1cf-4567-0cf6-4bfb-ad79bd0b9596@oracle.com> <84420dee-9889-b320-253b-f00551a2c9da@oracle.com> Message-ID: Hi Claes, yeah, that makes sense. > Hopefully we don't have to do one RFE per file.. :-) I should have written a set of files or directories or whatever. Thanks for your input. Best regards, Martin > -----Original Message----- > From: Claes Redestad > Sent: Mittwoch, 27. November 2019 23:35 > To: Doerr, Martin ; Baesken, Matthias > ; Erik Joelsson ; > 'build-dev at openjdk.java.net' ; 'hotspot- > dev at openjdk.java.net' > Subject: Re: building libjvm with -Os for space optimization - was : RE: RFR: > 8234525: enable link-time section-gc for linux s390x to remove unused code > > Hi Martin, > > On 2019-11-27 19:03, Doerr, Martin wrote: > > Hi Claes, > > > > that kind of surprises me. I'd expect files which rather benefit from -O3 to > be far less than those which benefit from -Os. > > Most performance critical code lives inside the code cache and is not > dependent on C++ compiler optimizations. > > I'd expect GC code, C2's register allocation and a few runtime files to be the > most performance critical C++ code. > > So the list of files for -Os may become long. > > that might very well be the end result, and once/if we've gone down this > path long enough to see that -O3 becomes the exception, we can re-examine the default. Changing the default and then trying to recuperate > would be hard/impossible to do incrementally.
> > > > Yeah, I think we should use native profiling information to find out what's > really going on > > > Your idea to change file by file and check for performance regression > makes sense to me, though > Hopefully we don't have to do one RFE per file.. :-) > > /Claes From christoph.langer at sap.com Thu Nov 28 10:21:11 2019 From: christoph.langer at sap.com (Langer, Christoph) Date: Thu, 28 Nov 2019 10:21:11 +0000 Subject: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 In-Reply-To: References: <5FEB52F4-2AA3-4ECD-A05E-5A2504F49984@oracle.com> <04CB4CC0-F5A5-4FBC-9044-9A746204C928@oracle.com> Message-ID: Hi Matthias, > And there's one last thing I was thinking about: Is it really necessary > that /cores is writable for the user? Maybe the system can still write a core > into /cores? I'm trying to verify this now on my Catalina MacBook; will let > you know. I checked that - the writability check is required. /Christoph From christoph.langer at sap.com Thu Nov 28 10:54:31 2019 From: christoph.langer at sap.com (Langer, Christoph) Date: Thu, 28 Nov 2019 10:54:31 +0000 Subject: 8234397: add OS uptime information to os::print_os_info output In-Reply-To: References: <30E06543-E081-429B-8293-8CA81D1F6870@sap.com> Message-ID: Hi Matthias, I'd like to see the uptime information in hs_err files. I, however, would rather like to see a more readable output like "OS uptime 10 days 3:10". I understand that's some more formatting effort but on the other hand you'd not need floating point calculations. As for os_windows.cpp: Why don't you provide an os::win32::print_uptime_info(st) method to align with the other implementations? Best regards Christoph > -----Original Message----- > From: hotspot-dev On Behalf Of > Baesken, Matthias > Sent: Donnerstag, 28.
November 2019 10:02 > To: Schmidt, Lutz ; David Holmes > ; 'hotspot-dev at openjdk.java.net' dev at openjdk.java.net> > Subject: RE: 8234397: add OS uptime information to os::print_os_info output > > Hi Lutz, I adjusted the days calculation, new webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234397.2/ > > Best regards, Matthias > > > > > Matthias, > > > > your change looks good to me overall. Please note: I'm not a Reviewer! > > > > I feel the urge to complain about one thing, though: > > When calculating the uptime in days, you divide the time retrieved from > the > > system (usually seconds or milliseconds) by a large number. Why do you > > force that number to be a float? I would prefer the denominator to be an > > "int" value. > > > > Rationale: floats (32bits) are very limited in precision, only 6 to 7 decimal > > digits. At least in the windows case, where you obtain milliseconds from > the > > system, your denominator is 86,400,000. At first glance, that does not fit > into > > a float mantissa. What saves you here are the prime factors "2" (10 in total). > > As a result, you only need 17 mantissa bits to represent the denominator. > > > > Thanks, > > Lutz > > > > On 25.11.19, 09:06, "hotspot-dev on behalf of Baesken, Matthias" > > dev-bounces at openjdk.java.net on behalf of matthias.baesken at sap.com> > > wrote: > > > > > > > > The comment in the posix code mentions that it doesn't work on > macOS > > but > > > doesn't say anything about Linux. Has it been tested on Solaris? > > > > > > > Hi David, it works on Solaris . > > I think I should adjust the comment (saying macOS AND Linux) . > > > > Best regards, Matthias > > > > > > > > > > > > One example that occurred last week - my colleague Christoph and > me > > > were browsing through an hs_err file of a crash on AIX .
> > > > When looking into the hs_err we wanted to know the uptime > because > > > our latest fontconfig - patches (for getting rid of the crash) needed a > > > reboot too to really work . > > > > Unfortunately we could not find the info , and we were > disappointed > > ( > > > then we noticed the crash is from OpenJDK and not our internal JVM ). > > > > > > > > > > > >>> Bug/webrev : > > > >>> https://bugs.openjdk.java.net/browse/JDK-8234397 > > > >>> http://cr.openjdk.java.net/~mbaesken/webrevs/8234397.1/ > > > >> > > > >> Can Linux not use the POSIX version? > > > >> > > > > > > > > Unfortunately the posix code does not give the desired result on > Linux > > (at > > > least on my test machines). > > > > > > The comment in the posix code mentions that it doesn't work on > macOS > > but > > > doesn't say anything about Linux. Has it been tested on Solaris? > > > > > > I'm really unsure about this code and am hoping someone more > > > knowledgeable in this areas can chime in. I'd be less concerned if there > > > was a single POSIX implementation that worked everywhere. :( Though > I > > > have my general concern about adding yet another potential point of > > > failure in the error reporting logic. > > > > > > Thanks, > > > David > > > > > > > Best regards, Matthias > > > > > > From christoph.langer at sap.com Thu Nov 28 11:02:47 2019 From: christoph.langer at sap.com (Langer, Christoph) Date: Thu, 28 Nov 2019 11:02:47 +0000 Subject: RFR: 8234741: enhance os::get_core_path on macOS In-Reply-To: References: <495770f4-122c-436c-6606-9f9dd79d1e90@oracle.com> Message-ID: Hi Matthias, this looks good to me. Please also add os:: to the other call to current_process_id() in line 3787 (the else branch). No need for another webrev, though. /Christoph > -----Original Message----- > From: hotspot-dev On Behalf Of > Baesken, Matthias > Sent: Mittwoch, 27. 
November 2019 11:09 > To: gerard ziemski > Cc: hotspot-dev developers > Subject: RE: RFR: 8234741: enhance os::get_core_path on macOS > > Hi Gerard, thanks for your input . > > New webrev : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234741.1/ > > > Best regards, Matthias > > > > > > hi Matthias, > > > > I had to look up "kern.corefile" option, of which I was not previously > > aware - it looks like a nice enhancement. > > > > I'd like to suggest a few small cleanups though: > > > > #1 add "os::" prefix to "current_process_id()" call > > #2 restrict the scope of "os::current_process_id()" to only the branch > > of "if" that needs it > > #3 expand on the "tail" comment a bit to explain why it might be needed > > > > Perhaps something like this: > > > > char coreinfo[MAX_PATH]; > > size_t sz = sizeof(coreinfo); > > int ret = sysctlbyname("kern.corefile", coreinfo, &sz, NULL, 0); > > if (ret == 0) { > > char *pid_pos = strstr(coreinfo, "%P"); > > const char* tail = (pid_pos != NULL) ? (pid_pos + 2) : ""; // skip > > over the "%P" to preserve any optional custom user pattern (i.e. %N, %U) > > if (pid_pos != NULL) { > > *pid_pos = '\0'; > > n = jio_snprintf(buffer, bufferSize, "%s%d%s", coreinfo, > > os::current_process_id(), tail); > > } else { > > n = jio_snprintf(buffer, bufferSize, "%s", coreinfo); > > } > > } > > > > BTW. I'm glad you agree to remove the unrelated AWT change from this fix > > and let the client team handle it. > > > > From matthias.baesken at sap.com Thu Nov 28 11:09:37 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Thu, 28 Nov 2019 11:09:37 +0000 Subject: RFR: 8234741: enhance os::get_core_path on macOS In-Reply-To: References: <495770f4-122c-436c-6606-9f9dd79d1e90@oracle.com> Message-ID: Thanks for the review. > Please also add os:: to the other call to current_process_id() in line 3787 Okay. Gerard, may I add you as reviewer ?
Best regards, Matthias > -----Original Message----- > From: Langer, Christoph > Sent: Donnerstag, 28. November 2019 12:03 > To: Baesken, Matthias ; gerard ziemski > > Cc: hotspot-dev developers > Subject: RE: RFR: 8234741: enhance os::get_core_path on macOS > > Hi Matthias, > > this looks good to me. > > Please also add os:: to the other call to current_process_id() in line 3787 (the > else branch). No need for another webrev, though. > > /Christoph > > > -----Original Message----- > > From: hotspot-dev On Behalf > Of > > Baesken, Matthias > > Sent: Mittwoch, 27. November 2019 11:09 > > To: gerard ziemski > > Cc: hotspot-dev developers > > Subject: RE: RFR: 8234741: enhance os::get_core_path on macOS > > > > Hi Gerard, thanks for your input . > > > > New webrev : > > > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234741.1/ > > > > > > Best regards, Matthias > > > > > > > > > > hi Matthias, > > > > > > I had to look up "kern.corefile" option, of which I was not previously > > > aware - it looks like a nice enhancement. > > > > > > I'd like to suggest a few small cleanups though: > > > > > > #1 add "os::" prefix to "current_process_id()" call > > > #2 restrict the scope of "os::current_process_id()" to only the branch > > > of "if" that needs it > > > #3 expand on the "tail" comment a bit to explain why it might be needed > > > > > > Perhaps something like this: > > > > > > char coreinfo[MAX_PATH]; > > > size_t sz = sizeof(coreinfo); > > > int ret = sysctlbyname("kern.corefile", coreinfo, &sz, NULL, 0); > > > if (ret == 0) { > > > char *pid_pos = strstr(coreinfo, "%P"); > > > const char* tail = (pid_pos != NULL) ? (pid_pos + 2) : ""; // skip > > > over the "%P" to preserve any optional custom user pattern (i.e. %N, > %U) > > > if (pid_pos != NULL) { > > > *pid_pos = '\0'; > > > n = jio_snprintf(buffer, bufferSize, "%s%d%s", coreinfo, > > > os::current_process_id(), tail); > > > } else { > > > 
n = jio_snprintf(buffer, bufferSize, "%s", coreinfo); > > > } > > > } > > > > > > BTW. I'm glad you agree to remove the unrelated AWT change from this > fix > > > and let the client team handle it. > > > > > > From thomas.stuefe at gmail.com Thu Nov 28 12:42:07 2019 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Thu, 28 Nov 2019 13:42:07 +0100 Subject: RFR: 8213415: BitMap::word_index_round_up overflow problems In-Reply-To: <2FB27F12-B779-419E-A501-715248CF2309@oracle.com> References: <24C2BDC9-AC35-4804-95C8-1B59747C1494@oracle.com> <5bd19480-d570-b73d-70fe-10e93ef2ecb8@oracle.com> <3F4F27AC-4812-4E55-99DF-25F0A7BD991D@oracle.com> <2FB27F12-B779-419E-A501-715248CF2309@oracle.com> Message-ID: Hi Kim, http://cr.openjdk.java.net/~kbarrett/8213415/open.03/src/hotspot/share/utilities/bitMap.hpp.udiff.html + // Limit max_size_in_bits so aligning up to a word never overflows. + static idx_t max_size_in_words() { return raw_to_words_align_down(~idx_t(0)); } + static idx_t max_size_in_bits() { return max_size_in_words() * BitsPerWord; } Could we have better comments? I first thought that max_size_in_words() means bitmap size, only the static specifier tipped me off. So, if I understand this correctly, we need this since we use the same type for bit and word indices and since we have an n:1 relationship between those two the max. bit index is necessarily smaller than _MAX? + // Assumes relevant validity checking for bit has already been done. + static idx_t raw_to_words_align_up(idx_t bit) { + return raw_to_words_align_down(bit + (BitsPerWord - 1)); + } Interestingly, this could break should we ever have different types for word- and bit indices. Should we ever want to templatize the underlying types, e.g. to support small bitmaps where 8 or 16 bits would be enough to encode a bitmap index. Then it could make sense to have a larger type for word indices than for bit indices and we could overflow here. (Just idle musings..
we have talked about using different types for bit- and word-indexes before. I have worked on a patch for this, but it got snowed under by other work. See: http://cr.openjdk.java.net/~stuefe/webrevs/bitmap-improvements/bitmap-better-types. What do you think, does this still make sense?) + + // Verify size_in_bits does not exceed maximum size. + static void verify_size(idx_t size_in_bits) NOT_DEBUG_RETURN; "maximum size" is unclear. Can you make the comment more precise? Also, could this function be private? + // Verify bit is less than size. + void verify_index(idx_t bit) const NOT_DEBUG_RETURN; + // Verify bit is not greater than size. + void verify_limit(idx_t bit) const NOT_DEBUG_RETURN; + // Verify [beg,end) is a valid range, e.g. beg <= end <= size(). + void verify_range(idx_t beg, idx_t end) const NOT_DEBUG_RETURN; I have a bit of trouble understanding these variants and how they are used. My understanding is that verify_limit() is to verify range sizes, since the valid range for a size type is always one larger than that for an index type, right? But it is used to test indices for validity, for example via to_words_align_down() -> word_addr(). Which would mean for a 64bit sized bitmap (1 word size) I can feed an invalid "64" as index to word_addr(), which verify_limit() would accept but would get me the invalid word index "1". I would have expected word_addr() to only return usable word addresses and to assert in this case. I also feel that, if I got it right, the naming is the wrong way around. I would have expected "verify_size()" to refer to the bitmap object map size, and "verify_limit()" to the type limit (similar to limits.h)..
Cheers, Thomas On Wed, Nov 27, 2019 at 1:41 AM Kim Barrett wrote: > > On Jun 11, 2019, at 12:42 PM, Kim Barrett > wrote: > > > >> On Jun 10, 2019, at 9:14 PM, Kim Barrett > wrote: > >> new webrevs: > >> full: http://cr.openjdk.java.net/~kbarrett/8213415/open.01/ > >> incr: http://cr.openjdk.java.net/~kbarrett/8213415/open.01.inc/ > > > > Stefan and I have been talking about this offline. We have some ideas > for further changes in > > a slightly different direction, so no point in anyone else reviewing the > open.01 changes right now > > (or maybe ever). > > Finally returning to this. Stefan Karlsson and Thomas Schatzl had some > offline feedback on earlier versions that led to some rethinking and > rework. This included an attempt to be a little more consistent with > nomenclature. There are still some lingering naming issues, which > might be worth fixing some other time. > > The basic approach hasn't changed though. From the original RFR: > > Constructing a BitMap now ensures the size is such that rounding it up > to a word boundary won't overflow. This is the new max_size_in_bits() > value. This lets us add some asserts and otherwise tidy things up in > some places by making use of that information. > > This engendered some changes to ParallelGC's ParMarkBitMap. It no > longer uses the obsolete BitMap::word_align_up, instead having its own > internal helper for aligning range ends that knows about invariants in > ParMarkBitMap. > > CR: > https://bugs.openjdk.java.net/browse/JDK-8213415 > > New webrev: > http://cr.openjdk.java.net/~kbarrett/8213415/open.03/ > (No incremental webrev; it wouldn't help that much for BitMap changes, > and there have been several intervening months since the last one.)
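[For readers outside the thread: the align-down/align-up helpers quoted from the webrev above can be exercised standalone. A simplified sketch — free functions instead of BitMap statics, so the details are illustrative only — is:]

```cpp
#include <cassert>
#include <cstdint>

using idx_t = std::uintptr_t;
const idx_t BitsPerWord = 8 * sizeof(idx_t);

// Round a bit index down to a word index.
idx_t raw_to_words_align_down(idx_t bit) { return bit / BitsPerWord; }

// Round up. Safe only when bit <= max_size_in_bits(): the addition of
// (BitsPerWord - 1) must not wrap around.
idx_t raw_to_words_align_up(idx_t bit) {
  return raw_to_words_align_down(bit + (BitsPerWord - 1));
}

// Largest bit count for which aligning up to a word boundary can never
// overflow -- the invariant the BitMap constructor now enforces.
idx_t max_size_in_words() { return raw_to_words_align_down(~idx_t(0)); }
idx_t max_size_in_bits()  { return max_size_in_words() * BitsPerWord; }
```

With the size capped at max_size_in_bits(), raw_to_words_align_up() can assume its addition never wraps, which is exactly the overflow problem the RFR title refers to.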
> > Testing: > mach5 tier1-5 > > From robbin.ehn at oracle.com Thu Nov 28 13:21:29 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Thu, 28 Nov 2019 14:21:29 +0100 Subject: RFR: 8234742: Improve handshake logging In-Reply-To: <31e885e2-1611-d40b-39c4-04c7cb74118b@oracle.com> References: <31e700c7-d2fe-2977-e1e2-13e3cc10927d@oracle.com> <31e885e2-1611-d40b-39c4-04c7cb74118b@oracle.com> Message-ID: <0cceccb0-899f-7f32-9aae-317339038660@oracle.com> Thanks David! /Robbin On 2019-11-28 06:31, David Holmes wrote: > Hi Robbin, > > Updates all seem fine. > > Thanks, > David > > On 28/11/2019 1:51 am, Robbin Ehn wrote: >> Hi David, thanks for having a look! >> >> On 11/27/19 4:25 AM, David Holmes wrote: >>> Hi Robbin, >>> >>> On 26/11/2019 11:06 pm, Robbin Ehn wrote: >>>> Hi, >>>> >>>> Here is the logging part separately: >>>> http://cr.openjdk.java.net/~rehn/8234742/v2/full/webrev/index.html >>>> >>>> It contains one additional change from the first version: >>>> ????? if (number_of_threads_issued < 1) { >>>> -????? log_debug(handshake)("No threads to handshake."); >>>> +????? log_handshake_info(start_time_ns, _op->name(), 0, 0, " (no threads)"); >>>> ??????? return; >>>> ????? } >>>> >>>> Passes t1-3. >>>> >>>> So this goes on top of 8234796: >>>> http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ >>> >>> src/hotspot/share/runtime/handshake.hpp >>> >>> I was surprised that "process_by_vmthread" doesn't really mean that. Perhaps >>> rename to try_process_by_vmThread? >> >> I did, but also renamed: >> bool handshake_process_by_vmthread() in thread.hpp >> >>> >>> --- >>> >>> src/hotspot/share/runtime/handshake.cpp >>> >>> 91???? log_info(handshake)("Handshake \"%s\", Targeted threads: %d, Executed >>> by targeted threads: %d, Total completion time: " JLONG_FORMAT " ns%s", >>> >>> Probably better to end with " ns, %s" so you don't have to remember to start >>> the 'extra' string with a space each time. 
>> >> Fixed, but I didn't like the ending "," when the string was empty, so I did a >> different fix! >> >>> >>> 168???? log_trace(handshake)("Threads signaled, begin processing blocked >>> threads by VMThtread") >>> >>> Existing typo: VMThtread >> >> Fixed! >> >>> >>> Otherwise seems okay. >>> >> >> v3 which is also rebased on latest 8234796: >> Inc: >> http://cr.openjdk.java.net/~rehn/8234742/v3/inc/webrev/index.html >> Full: >> http://cr.openjdk.java.net/~rehn/8234742/v3/full/webrev/index.html >> >> t1-3 >> >> Thanks, Robbin >> >>> Thanks, >>> David >>> ----- >>> >>>> Thanks, Robbin >>>> >>>> On 11/25/19 5:33 PM, Robbin Ehn wrote: >>>>> Hi all, please review. >>>>> >>>>> There is little useful information in the handshaking logs. >>>>> This changes the handshakes logs similar to safepoint logs, so the basic >>>>> need of >>>>> what handshake operation and how long it took easily can be tracked. >>>>> Also the per thread log is a bit enhanced. >>>>> >>>>> The refactoring using HandshakeOperation instead of a ThreadClosure is not >>>>> merely for this change. Other changes in the pipeline also require a more >>>>> complex HandshakeOperation. >>>>> >>>>> Issue: >>>>> https://bugs.openjdk.java.net/browse/JDK-8234742 >>>>> Changeset: >>>>> http://cr.openjdk.java.net/~rehn/8234742/full/webrev/ >>>>> >>>>> Passes t1-3. 
>>>>> >>>>> Thanks, Robbin >>>>> >>>>> Examples: >>>>> -Xlog:handshake,safepoint >>>>> >>>>> [7.148s][info][safepoint] Safepoint "ZMarkStart", Time since last: >>>>> 381873579 ns, Reaching safepoint: 451132 ns, At safepoint: 491202 ns, >>>>> Total: 942334 ns >>>>> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted >>>>> threads: 25, Executed by targeted threads: 8, Total completion time: 46884 ns >>>>> [7.151s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted >>>>> threads: 25, Executed by targeted threads: 10, Total completion time: 94547 ns >>>>> [7.152s][info][handshake] Handshake "ZMarkFlushAndFreeStacks", Targeted >>>>> threads: 25, Executed by targeted threads: 10, Total completion time: 33545 ns >>>>> [7.154s][info][safepoint] Safepoint "ZMarkEnd", Time since last: 4697901 >>>>> ns, Reaching safepoint: 218800 ns, At safepoint: 1462059 ns, Total: 1680859 ns >>>>> [7.156s][info][handshake] Handshake "ZRendezvous", Targeted threads: 25, >>>>> Executed by targeted threads: 10, Total completion time: 37291 ns >>>>> [7.157s][info][safepoint] Safepoint "ZVerify", Time since last: 2201206 ns, >>>>> Reaching safepoint: 295463 ns, At safepoint: 928077 ns, Total: 1223540 ns >>>>> [7.161s][info][safepoint] Safepoint "ZRelocateStart", Time since last: >>>>> 3161645 ns, Reaching safepoint: 206278 ns, At safepoint: 357284 ns, Total: >>>>> 563562 ns >>>>> [8.162s][info][safepoint] Safepoint "Cleanup", Time since last: 1000123769 >>>>> ns, Reaching safepoint: 526489 ns, At safepoint: 23345 ns, Total: 549834 ns >>>>> [8.182s][info][handshake] Handshake "RevokeOneBias", Targeted threads: 1, >>>>> Executed by targeted threads: 0, Total completion time: 41322 ns >>>>> >>>>> -Xlog:handshake*=trace >>>>> >>>>> [1.259s][trace][handshake ] Threads signaled, begin processing blocked >>>>> threads by VMThtread >>>>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread 
>>>>> 0x00007f2594022800, is_vm_thread: true, completed in 487 ns >>>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>>> 0x00007f259459e000, is_vm_thread: false, completed in 1233 ns >>>>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>>> 0x00007f25945a0000, is_vm_thread: false, completed in 669 ns >>>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>>> 0x00007f259428a800, is_vm_thread: true, completed in 462 ns >>>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>>> 0x00007f25945b3800, is_vm_thread: false, completed in 574 ns >>>>> ... >>>>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>>> 0x00007f25945b6000, is_vm_thread: true, completed in 100 ns >>>>> [1.259s][trace][handshake ] Processing handshake by VMThtread >>>>> [1.259s][debug][handshake,task ] Operation: ZRendezvous for thread >>>>> 0x00007f25945b7800, is_vm_thread: true, completed in 103 ns >>>>> [1.260s][info ][handshake ] Handshake "ZRendezvous", Targeted threads: 28, >>>>> Executed by targeted threads: 4, Total completion time: 629534 ns >>>>> [1.260s][debug][handshake,task ] Operation: ZRendezvous for thread >>>>> 0x00007f25945a3800, is_vm_thread: false, completed in 608 ns From robbin.ehn at oracle.com Thu Nov 28 14:29:50 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Thu, 28 Nov 2019 15:29:50 +0100 Subject: RFR: 8234796: Refactor Handshake::execute to take a HandshakeOperation In-Reply-To: <9dfd8c82-a7b2-fe4d-4631-31d0d5b29ff0@oracle.com> References: <1ff4ff37-c65e-43e7-a845-bc6ce06750f0@oracle.com> <9dfd8c82-a7b2-fe4d-4631-31d0d5b29ff0@oracle.com> Message-ID: <5e7f1ea4-147b-a7cc-2aed-7993adcb6cfb@oracle.com> Thanks David! Since I had no compile issues, fixing includes for the ThreadClosure move slipped my mind. 
Inc: http://cr.openjdk.java.net/~rehn/8234796/v3/inc/webrev/ Full: http://cr.openjdk.java.net/~rehn/8234796/v3/full/webrev/ Built fastdebug and release for x64 win/lin/osx, aarch64, ppc, solaris-sparc. And without precompiled header locally. arm32 and x86 have prior build issues and do not compile. Thanks, Robbin On 2019-11-28 07:21, David Holmes wrote: > Hi Robbin, > > On 28/11/2019 1:25 am, Robbin Ehn wrote: >> Hi all, please review. >> >> Here is the result after Per's suggestion: >> http://cr.openjdk.java.net/~rehn/8234796/v2/full/webrev/index.html >> (incremental made no sense) >> >> Due to circular dependency between thread.hpp and handshake.hpp, I moved the >> ThreadClosure to iterator.hpp, as was suggested offline. > > That all looks good to me! Thanks for splitting these up. > > Thanks, > David > ----- > >> Passes t1-3 >> >> Thanks, Robbin >> >> On 11/26/19 2:07 PM, Robbin Ehn wrote: >>> Hi all, please review. >>> >>> Issue: >>> https://bugs.openjdk.java.net/browse/JDK-8234796 >>> Code: >>> http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ >>> >>> The handshake code needs more information about the handshake operation. >>> We change type from ThreadClosure to HandshakeOperation in Handshake::execute. >>> This enables us to add more details to the HandshakeOperation as needed going >>> forward. >>> >>> Tested t1 and t1-3 together with the logging improvements in 8234742. >>> >>> It was requested that "HandshakeOperation()" would take the name instead >>> having "virtual const char* name();". Which is in this patch. >>> >>> Thanks, Robbin From robbin.ehn at oracle.com Thu Nov 28 14:30:46 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Thu, 28 Nov 2019 15:30:46 +0100 Subject: RFR: 8234796: Refactor Handshake::execute to take a HandshakeOperation In-Reply-To: <4b7d0895-f297-1c71-753c-55c42456078e@oracle.com> References: <1ff4ff37-c65e-43e7-a845-bc6ce06750f0@oracle.com> <4b7d0895-f297-1c71-753c-55c42456078e@oracle.com> Message-ID: Thanks Per.
But I forgot about some include changes due to ThreadClosure move, please see mail to David. Thanks, Robbin On 2019-11-28 08:23, Per Liden wrote: > On 11/27/19 4:25 PM, Robbin Ehn wrote: >> Hi all, please review. >> >> Here is the result after Per's suggestion: >> http://cr.openjdk.java.net/~rehn/8234796/v2/full/webrev/index.html >> (incremental made no sense) >> >> Due to circular dependency between thread.hpp and handshake.hpp, I moved the >> ThreadClosure to iterator.hpp, as was suggested offline. > > Thanks for making that change, Robbin! Looks good to me. > > cheers, > /Per > >> >> Passes t1-3 >> >> Thanks, Robbin >> >> On 11/26/19 2:07 PM, Robbin Ehn wrote: >>> Hi all, please review. >>> >>> Issue: >>> https://bugs.openjdk.java.net/browse/JDK-8234796 >>> Code: >>> http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ >>> >>> The handshake code needs more information about the handshake operation. >>> We change type from ThreadClosure to HandshakeOperation in Handshake::execute. >>> This enables us to add more details to the HandshakeOperation as needed going >>> forward. >>> >>> Tested t1 and t1-3 together with the logging improvements in 8234742. >>> >>> It was requested that "HandshakeOperation()" would take the name instead >>> having "virtual const char* name();". Which is in this patch. >>> >>> Thanks, Robbin From matthias.baesken at sap.com Thu Nov 28 15:15:52 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Thu, 28 Nov 2019 15:15:52 +0000 Subject: 8234397: add OS uptime information to os::print_os_info output In-Reply-To: References: <30E06543-E081-429B-8293-8CA81D1F6870@sap.com> Message-ID: > > Hi Matthias, > > I'd like to see the uptime information in hs_err files. > > I, however, would rather like to see a more readable output like "OS uptime > 10 days 3:10". I understand that's some more formatting effort but on the > other hand you'd not need floating point calculations. 
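The readable "OS uptime 10 days 3:10" format suggested above can be produced with integer arithmetic only. A minimal sketch, assuming the uptime is already available in seconds; `format_uptime` is a hypothetical helper name, not an existing `os::` function:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Sketch of integer-only uptime formatting, as suggested in the thread.
// Hypothetical helper; not part of the actual hotspot os:: API.
static void format_uptime(uint64_t secs, char* buf, size_t buflen) {
  uint64_t days  = secs / 86400;           // whole days
  uint64_t hours = (secs % 86400) / 3600;  // remaining hours
  uint64_t mins  = (secs % 3600) / 60;     // remaining minutes
  snprintf(buf, buflen, "OS uptime: %llu days %llu:%02llu",
           (unsigned long long)days, (unsigned long long)hours,
           (unsigned long long)mins);
}
```

No floating point is needed; the three divisions and remainders are enough, and `%02llu` keeps the minutes field two digits wide.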
> Is there already some function available that does the beautification ? Otherwise the benefit is a bit small just for the hs_err output . > As for os_windows.cpp: Why don't you spend a > os::win32::print_uptime_info(st); method to align with the other > implementations? > Ok why not , could do so ... Regards, Matthias From adinn at redhat.com Thu Nov 28 15:40:06 2019 From: adinn at redhat.com (Andrew Dinn) Date: Thu, 28 Nov 2019 15:40:06 +0000 Subject: Question on "JEP: JVMCI based JIT Compiler pre-compiled as shared library" In-Reply-To: <4c28f33e-a72e-b3cd-87de-f6266d9caae2@oracle.com> References: <4c28f33e-a72e-b3cd-87de-f6266d9caae2@oracle.com> Message-ID: <5b917002-fdbf-6f25-cf7a-79890fab97b2@redhat.com> Hi Vladimir, On 22/11/2019 18:23, Vladimir Kozlov wrote: > As you remember during this JVMLS we talked about our plan to transition > to Graal from C2 in a future. And using AOT'ed (SVM'ed) Graal (libgraal) > is important part of this transition, 8220623 is part of that. Most work > is done by GraalVM group in Oracle. They just released 19.3 version of > GraalVM that is based on JDK 11 and using libgraal by default which use > JVMCI changes from 8220623. I am not sure that last statement is 100% correct :-) It depends what you mean by 'based' on jdk11. The latest GraalVM relies on a specific downstream jdk11 tree maintained by Oracle. GraalVM users have to download a build derived from that tree in order to be able to use the Graal JIT and Substrate native image generator to build native images and shared libraries including, I believe, libgraal. I understand that the changes made to that downstream repo are public i.e. Oracle's GraalVM team have published sources for this tree which include the relevant backports of upstream patches. Red Hat as maintainers of jdk11u would very much like to correct that situation (as, I believe, would the GraalVM team). We have a list of all the JIRAs whose patches have been backported. 
Is there any reason you are aware of for the jdk11u maintainers not to backport the relevant patches from the jdk dev tree to jdk11u? > On OpenJDK side we plan to release libgraal EA based on Metropolis > repository, as I said during JVMLS. This is tracked by RFE [1]. RFE's > description is outdated since Metropolis repo is based on JDK 14 now and > I need to update it. There is also Graal's PR, Bob V. is working on, to > adjust upstream Graal/SVM code to API changes done in JDK 14. That's very good news. I'm very much looking forward to seeing Metropolis acquire both a JIT and a VM implemented in Java. regards, Andrew Dinn ----------- From igor.ignatyev at oracle.com Thu Nov 28 16:36:10 2019 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 28 Nov 2019 08:36:10 -0800 Subject: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 In-Reply-To: References: <5FEB52F4-2AA3-4ECD-A05E-5A2504F49984@oracle.com> <04CB4CC0-F5A5-4FBC-9044-9A746204C928@oracle.com> Message-ID: Hi Christoph, both your suggestions sound good to me. -- Igor > On Nov 28, 2019, at 12:55 AM, Langer, Christoph wrote: > > Hi Matthias, > > First of all, thanks for tackling this issue which pops up with MacOS 10.15. > > In the test, however, I think we should not/don't need to query for the explicit MacOS version. Just do the writability check and if it fails, throw the SkippedException. > > As for the message text, I'd prefer: "Directory \"" + coresDir + "\" is not writable". I think that's both, explicit and concise. @Igor, what do you think? > > And there's one last thing I was thinking about: Is it really necessary that /cores is writable for the user? Maybe the system can still write a core into /cores? I'm trying to verify this now on my Catalina MacBook... will let you know. > > Cheers > Christoph > > From: Baesken, Matthias > Sent: Donnerstag, 28.
November 2019 09:33 > To: Igor Ignatyev > Cc: hotspot-dev at openjdk.java.net; Langer, Christoph > Subject: RE: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 > > so I'd suggest to change it to something like '/cores is not writable'. > Hi Igor, done : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.2/ > > > May I add you as reviewer now ? > > Best regards, Matthias > > > From: Igor Ignatyev > > Sent: Mittwoch, 27. November 2019 19:15 > To: Baesken, Matthias > > Cc: hotspot-dev at openjdk.java.net ; Langer, Christoph > > Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 > > SkippedException's message is used by jtreg as a reason why a test was skipped, so I'd suggest to change it to something like '/cores is not writable'. > > I'm also not sure if we really need to check version string at L#123 now, but it's fine if you decide to keep it. > > > Thanks. > -- Igor > > > On Nov 27, 2019, at 6:30 AM, Baesken, Matthias > wrote: > > ? I'd recommend you to use jtreg.SkippedException to signal that the test can't be run or to introduce new @requires property > > > Hello, I now changed the test to throw a SkippedException : > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.1/ > > > Best regards, Matthias > > > From: Igor Ignatyev > > Sent: Dienstag, 26. November 2019 18:35 > To: Baesken, Matthias > > Cc: hotspot-dev at openjdk.java.net > Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 > > for now, we have only test/jtreg-ext/requires/VMProps.java which sets @requires properties; although 'env.core.accessible' and similar properties aren't vm-related properties per-se so VMProps isn't the best place to set them. 
however VMProps already sets a few non vm properties, and given introduction of another class to set these properties is a bit of hassle, I think it's fine to just add `env.core.accessible` to VMProps (and later rename VMProps to be more appropriate for all kinds of properties). > > there is one thing which you should be aware of when adding any new requires properties: jtreg runs VMProps once for *every* execution even if none of "target" tests use @requires, in other words, all test executions pay the price of setting all properties, therefore it shouldn't be costly. > > -- Igor > > > > On Nov 26, 2019, at 2:07 AM, Baesken, Matthias > wrote: > > > or to introduce new @requires property, say `env.core.accessible`, which is true iff core dumping is enabled and dumped cores can be accessed from the test code. > > Hello Igor, that sounds interesting. > > Who would set the say env.core.accessible property accordingly (so that in our environment on 10.15 it would be false) ? > > Best regards, Matthias > > > From: Igor Ignatyev > > Sent: Montag, 25. November 2019 21:10 > To: Baesken, Matthias > > Cc: hotspot-dev at openjdk.java.net > Subject: Re: RFR: [XS] 8234625: hs test serviceability/sa/ClhsdbCDSCore.java fails on macOS 10.15 > > Hi Matthias, > > your solution will hide the fact that the coverage from this test will be missed on macos 10.15+. I'd recommend you to use jtreg.SkippedException to signal that the test can't be run or to introduce new @requires property, say `env.core.accessible`, which is true iff core dumping is enabled and dumped cores can be accessed from the test code. > > -- Igor > > > > > On Nov 25, 2019, at 1:58 AM, Baesken, Matthias > wrote: > > Hello, the test > serviceability/sa/ClhsdbCDSCore.java > fails on macOS 10.15 .
> exception : > java.lang.Error: cores is not a directory or does not have write permissions > at ClhsdbCDSCore.main(ClhsdbCDSCore.java:115) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:564) > at com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127) > at java.base/java.lang.Thread.run(Thread.java:833) > > Looks like the test checks that directory /cores is writable : > File coresDir = new File("/cores"); > if (!coresDir.isDirectory() || !coresDir.canWrite()) { ... // fail > However on macOS 10.15 /cores is not writable any more (at least for most users, including our test user). > So the test fails. > > My change adjusts the test, so that it gives a clearer error message, and returns gracefully in case we are running on macOS 10.15 and notice a non-writeable /cores directory . > > > > Bug/webrev : > > https://bugs.openjdk.java.net/browse/JDK-8234625 > > http://cr.openjdk.java.net/~mbaesken/webrevs/8234625.0/ > > > Best regards, Matthias From per.liden at oracle.com Thu Nov 28 16:55:12 2019 From: per.liden at oracle.com (Per Liden) Date: Thu, 28 Nov 2019 17:55:12 +0100 Subject: RFR: 8234796: Refactor Handshake::execute to take a HandshakeOperation In-Reply-To: <5e7f1ea4-147b-a7cc-2aed-7993adcb6cfb@oracle.com> References: <5e7f1ea4-147b-a7cc-2aed-7993adcb6cfb@oracle.com> Message-ID: Still looks good! /Per > On 28 Nov 2019, at 15:29, Robbin Ehn wrote: > > Thanks David! > > Since I had no compile issues, fixing includes for the ThreadClosure move slipped my mind.
> > Inc: > http://cr.openjdk.java.net/~rehn/8234796/v3/inc/webrev/ > Full: > http://cr.openjdk.java.net/~rehn/8234796/v3/full/webrev/ > > Built fastdebug and release for x64 win/lin/osx, aarch64, ppc, solaris-sparc. > And without precompiled header locally. > arm32 and x86 have prior build issues and do not compile. > > Thanks, Robbin > >> On 2019-11-28 07:21, David Holmes wrote: >> Hi Robbin, >>> On 28/11/2019 1:25 am, Robbin Ehn wrote: >>> Hi all, please review. >>> >>> Here is the result after Per's suggestion: >>> http://cr.openjdk.java.net/~rehn/8234796/v2/full/webrev/index.html >>> (incremental made no sense) >>> >>> Due to circular dependency between thread.hpp and handshake.hpp, I moved the ThreadClosure to iterator.hpp, as was suggested offline. >> That all looks good to me! Thanks for splitting these up. >> Thanks, >> David >> ----- >>> Passes t1-3 >>> >>> Thanks, Robbin >>> >>> On 11/26/19 2:07 PM, Robbin Ehn wrote: >>>> Hi all, please review. >>>> >>>> Issue: >>>> https://bugs.openjdk.java.net/browse/JDK-8234796 >>>> Code: >>>> http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ >>>> >>>> The handshake code needs more information about the handshake operation. >>>> We change type from ThreadClosure to HandshakeOperation in Handshake::execute. >>>> This enables us to add more details to the HandshakeOperation as needed going forward. >>>> >>>> Tested t1 and t1-3 together with the logging improvements in 8234742. >>>> >>>> It was requested that "HandshakeOperation()" would take the name instead having "virtual const char* name();". Which is in this patch.
>>>> >>>> Thanks, Robbin From kim.barrett at oracle.com Thu Nov 28 22:01:34 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 28 Nov 2019 17:01:34 -0500 Subject: RFR: 8213415: BitMap::word_index_round_up overflow problems In-Reply-To: References: <24C2BDC9-AC35-4804-95C8-1B59747C1494@oracle.com> <5bd19480-d570-b73d-70fe-10e93ef2ecb8@oracle.com> <3F4F27AC-4812-4E55-99DF-25F0A7BD991D@oracle.com> <2FB27F12-B779-419E-A501-715248CF2309@oracle.com> Message-ID: <4CD1F762-EAF0-497B-A307-2E9CABFCDD14@oracle.com> > On Nov 28, 2019, at 7:42 AM, Thomas Stüfe wrote: > > Hi Kim, > > http://cr.openjdk.java.net/~kbarrett/8213415/open.03/src/hotspot/share/utilities/bitMap.hpp.udiff.html > > + // Limit max_size_in_bits so aligning up to a word never overflows. > + static idx_t max_size_in_words() { return raw_to_words_align_down(~idx_t(0)); } > + static idx_t max_size_in_bits() { return max_size_in_words() * BitsPerWord; } > > Could we have better comments? I first thought that max_size_in_words() means bitmap size, only the static specifier tipped me off. I'm not sure I understand your issue. I modified the comment a bit to address what I think you are saying, but I'm not sure I've covered it. > So, if I understand this correctly, we need this since we use the same type for bit and word indices and since we have a n:1 relationship between those two the max. bit index is necessarily smaller than _MAX? The point is to avoid overflow of the type used for bit indices when aligning a value up to a multiple of the word size. This doesn't really have anything to do with using the same types for bit indices and word indices, though using different types might affect the details of some of the calculations, and the range for the word type would need to be suitably chosen to accommodate the bit range. > + // Assumes relevant validity checking for bit has already been done.
> + static idx_t raw_to_words_align_up(idx_t bit) { > + return raw_to_words_align_down(bit + (BitsPerWord - 1)); > + } > > Interestingly, this could break should we ever have different types for word- and bit indices. Should we ever want to templatize the underlying types, eg. to support small bitmaps where 8 or 16 bits would be enough to encode a bitmap index. Then it could make sense to have a larger type for word indices than for bit indices and we could overflow here. I don't understand why one would need a larger type for word indices? The range for word indices is necessarily smaller than the range for bit indices. > (Just idle musings.. we have talked about using different types for bit- and word-indexes before. I have worked on a patch for this, but it got snowed under by other work. See: http://cr.openjdk.java.net/~stuefe/webrevs/bitmap-improvements/bitmap-better-types. What do you think, does this still make sense?) Making the types different obviously has some benefits for type safety, if implemented in a way that actually provides a distinction. Just using different typedefs for the same underlying type doesn't accomplish that though. I think a strong type for words could be hidden in the private implementation, and have thought about doing something like that, but not for this change. > + > + // Verify size_in_bits does not exceed maximum size. > + static void verify_size(idx_t size_in_bits) NOT_DEBUG_RETURN; > > "maximum size" is unclear. Can you make the comment more precise? I modified the comments for the verify functions to be more explicit. > Also, could this function be private? I put the new verify functions near the pre-existing ones. There are a lot of protected functions in this class that seem like they ought to be private. I don't want to address that in this change. > + // Verify bit is less than size. > + void verify_index(idx_t bit) const NOT_DEBUG_RETURN; > + // Verify bit is not greater than size. 
> + void verify_limit(idx_t bit) const NOT_DEBUG_RETURN; > + // Verify [beg,end) is a valid range, e.g. beg <= end <= size(). > + void verify_range(idx_t beg, idx_t end) const NOT_DEBUG_RETURN; > > I have a bit of a trouble understanding these variants and how they are used. > > I believe to understand that verify_limit() is to verify range sizes, since the valid range for a size type is always one larger than that for an index type, right? But it is used to test indices for validity, for example via to_words_align_down() -> word_addr(). Which would mean for a 64bit sized bitmap (1 word size) I can feed an invalid "64" as index to word_addr(), which verify_limit() would accept but would get me the invalid word index "1". I would have expected word_addr() to only return usable word adresses and to assert in this case. verify_limit checks that the argument is a valid index or one-past-the-last designator, e.g. it is a valid iteration limit. "limit" was the preferred term from discussion with tschatzl. There are uses of word_addr() to obtain a one-past-the-last pointer. > I also feel that if I got it right the naming is the wrong way around. I would have expected "verify_size()" to refer to the bitmap object map size, and "verify_limit()" to the type limit (similar to limits.h).. verify_size is about checking the validity of a value that will be used as the size of a bitmap. Maybe the comment changes help? 
New webrevs: full: https://cr.openjdk.java.net/~kbarrett/8213415/open.04/ incr: https://cr.openjdk.java.net/~kbarrett/8213415/open.04.inc/ From thomas.stuefe at gmail.com Fri Nov 29 08:03:40 2019 From: thomas.stuefe at gmail.com (Thomas Stüfe) Date: Fri, 29 Nov 2019 09:03:40 +0100 Subject: RFR: 8213415: BitMap::word_index_round_up overflow problems In-Reply-To: <4CD1F762-EAF0-497B-A307-2E9CABFCDD14@oracle.com> References: <24C2BDC9-AC35-4804-95C8-1B59747C1494@oracle.com> <5bd19480-d570-b73d-70fe-10e93ef2ecb8@oracle.com> <3F4F27AC-4812-4E55-99DF-25F0A7BD991D@oracle.com> <2FB27F12-B779-419E-A501-715248CF2309@oracle.com> <4CD1F762-EAF0-497B-A307-2E9CABFCDD14@oracle.com> Message-ID: Hi Kim, On Thu, Nov 28, 2019 at 11:01 PM Kim Barrett wrote: > > On Nov 28, 2019, at 7:42 AM, Thomas Stüfe > wrote: > > > > Hi Kim, > > > > > http://cr.openjdk.java.net/~kbarrett/8213415/open.03/src/hotspot/share/utilities/bitMap.hpp.udiff.html > > > > + // Limit max_size_in_bits so aligning up to a word never overflows. > > + static idx_t max_size_in_words() { return > raw_to_words_align_down(~idx_t(0)); } > > + static idx_t max_size_in_bits() { return max_size_in_words() * > BitsPerWord; } > > > > Could we have better comments? I first thought that max_size_in_words() > means bitmap size, only the static specifier tipped me off. > > I'm not sure I understand your issue. I modified the comment a bit to > address what I think you are saying, but I'm not sure I've covered it. > > I meant that you use "size" with different meanings - throughout most of the class size is the size, in bits, of the BitMap object (_size). Here it is something different, not sure yet what exactly, see below. > > So, if I understand this correctly, we need this since we use the same > type for bit and word indices and since we have a n:1 relationship between > those two the max. bit index is necessarily smaller than _MAX?
> > The point is to avoid overflow of the type used for bit indices when > aligning a value up to a multiple of the word size. This doesn't > really have anything to do with using the same types for bit indices > and word indices, though using different types might affect the > details of some of the calculations, and the range for the word type > would need to be suitably chosen to accommodate the bit range. > > I'm still in the dark. In your current version of max_size_in_words() and max_size_in_bits() there is an overflow, since both bit- and word indexes use the same type. With 64bit I come to: FFFFFFFF.FFFFFFC0 for max word index, 3FFFFFF.FFFFFFFF for max bit index. For 64bit types this does not matter much, but if we ever were to use smaller types, e.g. uint16_t, it would matter. Also, I find it surprising that max bit index is smaller than max word index. Side note, I was interested in using smaller types because long term I would like to have a BitMap class in cases where I today use little hand written bitmaps. As it is now, BitMap has a pointer and a size, which makes it a 16byte structure on 64 bit, which is rather fat. The indirection is also often unwanted. I would like to have a BitMap class which contains directly the data as member(s), e.g. one where it just has a 16bit word or, maybe, an array of multiple words. That would make this structure a lot smaller and better suited to be included in space sensitive structures.
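The overflow hazard under discussion can be reproduced with a small standalone sketch. The names below are stand-ins for BitMap's `idx_t` and `BitsPerWord`, not the real hotspot code; they only model the align-up arithmetic and the `max_size_in_bits()` cap:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-ins for BitMap's idx_t and BitsPerWord (64-bit words).
typedef uint64_t idx_t;
static const idx_t BitsPerWord = 64;

// Aligning down can never overflow: it only discards the low bits.
static idx_t raw_to_words_align_down(idx_t bit) {
  return bit / BitsPerWord;
}

// Aligning up adds (BitsPerWord - 1) first; for bit counts close to the
// maximum of idx_t, that addition wraps around -- the overflow at issue.
static idx_t raw_to_words_align_up(idx_t bit) {
  return raw_to_words_align_down(bit + (BitsPerWord - 1));
}

// Capping constructed sizes at max_size_in_bits() keeps align-up safe.
static idx_t max_size_in_words() { return raw_to_words_align_down(~idx_t(0)); }
static idx_t max_size_in_bits()  { return max_size_in_words() * BitsPerWord; }
```

For any size up to `max_size_in_bits()` the align-up cannot wrap; one bit past it, the addition overflows and the word count collapses to 0, which is why the patch verifies sizes against that limit at construction time.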
Then it could make sense to have a larger type for > word indices than for bit indices and we could overflow here. > > I don't understand why one would need a larger type for word indices? > The range for word indices is necessarily smaller than the range for > bit indices. > > > (Just idle musings.. we have talked about using different types for bit- > and word-indexes before. I have worked on a patch for this, but it got > snowed under by other work. See: > http://cr.openjdk.java.net/~stuefe/webrevs/bitmap-improvements/bitmap-better-types. > What do you think, does this still make sense?) > > Making the types different obviously has some benefits for type > safety, if implemented in a way that actually provides a distinction. > Just using different typedefs for the same underlying type doesn't > accomplish that though. I think a strong type for words could be > hidden in the private implementation, and have thought about doing > something like that, but not for this change. > > Oh sure. This was just the first draft. My idea was - if only for test reasons - to use a class type which wraps around a numeric and defines +/- operations and assignments. I wonder though whether there is a simpler way to make the compiler complain about assignments between word- and bit indices. But even with the same underlying typedef, using different types for word- and bit indexes would make the code more readable and clearer. > > + > > + // Verify size_in_bits does not exceed maximum size. > > + static void verify_size(idx_t size_in_bits) NOT_DEBUG_RETURN; > > > > "maximum size" is unclear. Can you make the comment more precise? > > I modified the comments for the verify functions to be more explicit. > > Thank you > > Also, could this function be private? > > I put the new verify functions near the pre-existing ones. There are a > lot of protected functions in this class that seem like they ought to > be private. I don't want to address that in this change. 
> > > + // Verify bit is less than size. > > + void verify_index(idx_t bit) const NOT_DEBUG_RETURN; > > + // Verify bit is not greater than size. > > + void verify_limit(idx_t bit) const NOT_DEBUG_RETURN; > > + // Verify [beg,end) is a valid range, e.g. beg <= end <= size(). > > + void verify_range(idx_t beg, idx_t end) const NOT_DEBUG_RETURN; > > > > I have a bit of a trouble understanding these variants and how they are > used. > > > > I believe to understand that verify_limit() is to verify range sizes, > since the valid range for a size type is always one larger than that for an > index type, right? But it is used to test indices for validity, for example > via to_words_align_down() -> word_addr(). Which would mean for a 64bit > sized bitmap (1 word size) I can feed an invalid "64" as index to > word_addr(), which verify_limit() would accept but would get me the invalid > word index "1". I would have expected word_addr() to only return usable > word adresses and to assert in this case. > > verify_limit checks that the argument is a valid index or > one-past-the-last designator, e.g. it is a valid iteration limit. > "limit" was the preferred term from discussion with tschatzl. > > Okay, with "iteration limit" in mind I understand the naming better. > There are uses of word_addr() to obtain a one-past-the-last pointer. > > > I also feel that if I got it right the naming is the wrong way around. I > would have expected "verify_size()" to refer to the bitmap object map size, > and "verify_limit()" to the type limit (similar to limits.h).. > > verify_size is about checking the validity of a value that will be > used as the size of a bitmap. Maybe the comment changes help? > > New webrevs: > full: https://cr.openjdk.java.net/~kbarrett/8213415/open.04/ > incr: https://cr.openjdk.java.net/~kbarrett/8213415/open.04.inc/ > > > The comments look better, thanks. 
Cheers, Thomas From sgehwolf at redhat.com Fri Nov 29 09:04:00 2019 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Fri, 29 Nov 2019 10:04:00 +0100 Subject: [PING2!] RFR: 8230305: Cgroups v2: Container awareness In-Reply-To: <5eec97c04d86562346243c1db3832e86e13697a1.camel@redhat.com> References: <072f66ee8c44034831b4e38f6470da4bff6edd07.camel@redhat.com> <7540a208e306ab957032b18178a53c6afa105d33.camel@redhat.com> <5eec97c04d86562346243c1db3832e86e13697a1.camel@redhat.com> Message-ID: On Fri, 2019-11-15 at 17:56 +0100, Severin Gehwolf wrote: > On Fri, 2019-11-08 at 15:21 +0100, Severin Gehwolf wrote: > > Hi Bob, > > > > On Wed, 2019-11-06 at 10:47 +0100, Severin Gehwolf wrote: > > > On Tue, 2019-11-05 at 16:54 -0500, Bob Vandette wrote: > > > > Severin, > > > > > > > > Thanks for taking on this cgroup v2 improvement. > > > > > > > > In general I like the implementation and the refactoring. The CachedMetric class is nice. > > > > We can add any metric we want to cache in a more general way. > > > > > > > > Is this the latest version of the webrev? > > > > > > > > http://cr.openjdk.java.net/~sgehwolf/webrevs/cgroupsv2-hotspot/03/webrev/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp.html > > > > > > > > It looks like you need to add the caching support for active_processor_count (JDK-8227006). > > [...] > > > I'll do a proper rebase ASAP. > > > > Latest webrev: > > http://cr.openjdk.java.net/~sgehwolf/webrevs/cgroupsv2-hotspot/05/webrev/ > > > > > > I'm not sure it's worth providing different strings for Unlimited versus Max or Scaled shares. > > > > I'd just try to be compatible with the cgroupv2 output so you don't have to change the test. > > > > > > OK. Will do. > > > > Unfortunately, there is no way of NOT changing TestCPUAwareness.java as > > it expects CPU Shares to be written to the cgroup filesystem verbatim. > > That's no longer the case for cgroups v2 (at least for crun). Either > > way, most test changes are gone now. 
> > > > > > I wonder if it's worth trying to synthesize memory_max_usage_in_bytes() by keeping the highest > > > > value ever returned by the API. > > > > > > Interesting idea. I'll ponder this a bit and get back to you. > > > > This has been implemented. I'm not sure this is correct, though. It > > merely piggy-backs on calls to memory_usage_in_bytes() and keeps the > > high watermark value of that. > > > > Testing passed on F31 with cgroups v2 controllers properly configured > > (podman) and hybrid (legacy hierarchy) with docker/podman. > > > > Thoughts? > > Ping? Anyone willing to review this? It would be nice to make some progress. Thanks, Severin > Metrics work proposed for RFR here: > http://mail.openjdk.java.net/pipermail/core-libs-dev/2019-November/063464.html > > Thanks, > Severin From robbin.ehn at oracle.com Fri Nov 29 09:13:02 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Fri, 29 Nov 2019 10:13:02 +0100 Subject: RFR: 8234796: Refactor Handshake::execute to take a HandshakeOperation In-Reply-To: References: <5e7f1ea4-147b-a7cc-2aed-7993adcb6cfb@oracle.com> Message-ID: Thanks Per! /Robbin On 11/28/19 5:55 PM, Per Liden wrote: > Still looks good! > > /Per > >> On 28 Nov 2019, at 15:29, Robbin Ehn wrote: >> >> Thanks David! >> >> Since I had no compile issues, fixing includes for the ThreadClosure move slipped my mind. >> >> Inc: >> http://cr.openjdk.java.net/~rehn/8234796/v3/inc/webrev/ >> Full: >> http://cr.openjdk.java.net/~rehn/8234796/v3/full/webrev/ >> >> Built fastdebug and release for x64 win/lin/osx, aarch64, ppc, solaris-sparc. >> And without precompiled header locally. >> arm32 and x86 have prior build issues and do not compile. >> >> Thanks, Robbin >> >>> On 2019-11-28 07:21, David Holmes wrote: >>> Hi Robbin, >>>> On 28/11/2019 1:25 am, Robbin Ehn wrote: >>>> Hi all, please review. 
>>>> >>>> Here is the result after Per's suggestion: >>>> http://cr.openjdk.java.net/~rehn/8234796/v2/full/webrev/index.html >>>> (incremental made no sense) >>>> >>>> Due to circular dependency between thread.hpp and handshake.hpp, I moved the ThreadClosure to iterator.hpp, as was suggested offline. >>> That all looks good to me! Thanks for splitting these up. >>> Thanks, >>> David >>> ----- >>>> Passes t1-3 >>>> >>>> Thanks, Robbin >>>> >>>> On 11/26/19 2:07 PM, Robbin Ehn wrote: >>>>> Hi all, please review. >>>>> >>>>> Issue: >>>>> https://bugs.openjdk.java.net/browse/JDK-8234796 >>>>> Code: >>>>> http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ >>>>> >>>>> The handshake code needs more information about the handshake operation. >>>>> We change type from ThreadClosure to HandshakeOperation in Handshake::execute. >>>>> This enables us to add more details to the HandshakeOperation as needed going forward. >>>>> >>>>> Tested t1 and t1-3 together with the logging improvements in 8234742. >>>>> >>>>> It was requested that "HandshakeOperation()" would take the name instead of having "virtual const char* name();". Which is in this patch. >>>>> >>>>> Thanks, Robbin > From robbin.ehn at oracle.com Fri Nov 29 11:52:49 2019 From: robbin.ehn at oracle.com (Robbin Ehn) Date: Fri, 29 Nov 2019 12:52:49 +0100 Subject: RFR: 8234796: Refactor Handshake::execute to take a HandshakeOperation In-Reply-To: <5e7f1ea4-147b-a7cc-2aed-7993adcb6cfb@oracle.com> References: <1ff4ff37-c65e-43e7-a845-bc6ce06750f0@oracle.com> <9dfd8c82-a7b2-fe4d-4631-31d0d5b29ff0@oracle.com> <5e7f1ea4-147b-a7cc-2aed-7993adcb6cfb@oracle.com> Message-ID: <1745284a-545f-7bf5-533a-945d917cfe08@oracle.com> Hi, Shenandoah just added a handshake, here is the additional fix. http://cr.openjdk.java.net/~rehn/8234796/v4/full/ http://cr.openjdk.java.net/~rehn/8234796/v4/inc/ Thanks, Robbin On 11/28/19 3:29 PM, Robbin Ehn wrote: > Thanks David! 
> > Since I had no compile issues, fixing includes for the ThreadClosure move > slipped my mind. > > Inc: > http://cr.openjdk.java.net/~rehn/8234796/v3/inc/webrev/ > Full: > http://cr.openjdk.java.net/~rehn/8234796/v3/full/webrev/ > > Built fastdebug and release for x64 win/lin/osx, aarch64, ppc, solaris-sparc. > And without precompiled header locally. > arm32 and x86 have prior build issues and do not compile. > > Thanks, Robbin > > On 2019-11-28 07:21, David Holmes wrote: >> Hi Robbin, >> >> On 28/11/2019 1:25 am, Robbin Ehn wrote: >>> Hi all, please review. >>> >>> Here is the result after Per's suggestion: >>> http://cr.openjdk.java.net/~rehn/8234796/v2/full/webrev/index.html >>> (incremental made no sense) >>> >>> Due to circular dependency between thread.hpp and handshake.hpp, I moved the >>> ThreadClosure to iterator.hpp, as was suggested offline. >> >> That all looks good to me! Thanks for splitting these up. >> >> Thanks, >> David >> ----- >> >>> Passes t1-3 >>> >>> Thanks, Robbin >>> >>> On 11/26/19 2:07 PM, Robbin Ehn wrote: >>>> Hi all, please review. >>>> >>>> Issue: >>>> https://bugs.openjdk.java.net/browse/JDK-8234796 >>>> Code: >>>> http://cr.openjdk.java.net/~rehn/8234796/full/webrev/ >>>> >>>> The handshake code needs more information about the handshake operation. >>>> We change type from ThreadClosure to HandshakeOperation in Handshake::execute. >>>> This enables us to add more details to the HandshakeOperation as needed >>>> going forward. >>>> >>>> Tested t1 and t1-3 together with the logging improvements in 8234742. >>>> >>>> It was requested that "HandshakeOperation()" would take the name instead of >>>> having "virtual const char* name();". Which is in this patch. 
>>>> >>>> Thanks, Robbin From adinn at redhat.com Fri Nov 29 11:57:01 2019 From: adinn at redhat.com (Andrew Dinn) Date: Fri, 29 Nov 2019 11:57:01 +0000 Subject: 8233948: AArch64: Incorrect mapping between OptoReg and VMReg for high 64 bits of Vector Register In-Reply-To: References: Message-ID: Hi Joshua, Thanks for looking into this and suggesting the required cleanup. On 15/11/2019 10:29, Joshua Zhu (Arm Technology China) wrote: >> Please review the following patch: >> JBS: https://bugs.openjdk.java.net/browse/JDK-8233948 >> Webrev: http://cr.openjdk.java.net/~jzhu/8233948/webrev.00/ > > Please let me know if any comments. Thanks a lot. I think this is a good start but there is more work to do to clean up method RegisterSaver::save_live_registers defined in file sharedRuntime_aarch64.cpp. It would be good to do that clean up as part of this patch so it is all consistent. So, the first step is to add a couple of extra enum constants in FloatRegisterImpl: 128 class FloatRegisterImpl: public AbstractRegisterImpl { 129 public: 130 enum { 131 number_of_registers = 32, 132 max_slots_per_register = 4, save_slots_per_register = 2, extra_save_slots_per_register = 2 The 2 new tags are needed because sharedRuntime_aarch64.cpp normally only saves 2 slots per register but it occasionally needs to save all 4. The first bit of code in sharedRuntime_aarch64.cpp that needs fixing is this enum: 100 enum layout { 101 fpu_state_off = 0, 102 fpu_state_end = fpu_state_off+FPUStateSizeInWords-1, 103 // The frame sender code expects that rfp will be in 104 // the "natural" place and will override any oopMap 105 // setting for it. We must therefore force the layout 106 // so that it agrees with the frame sender code. 107 r0_off = fpu_state_off+FPUStateSizeInWords, 108 rfp_off = r0_off + 30 * 2, 109 return_off = rfp_off + 2, // slot for return address 110 reg_save_size = return_off + 2}; This information defines the layout of the data normally saved to stack (i.e. 2 slots per fp reg). 
These values should really be computed using the enum values you added to the definitions for RegisterImpl and FloatRegisterImpl. FPUStateSizeInWords is actually defined in assembler.hpp. It doesn't really need to be there but we put it there to follow the logic for x86 where the amount of saved state is more complicated. The AArch64 definition at assembler.hpp:607 is this: 607 const int FPUStateSizeInWords = 32 * 2; So, that can now be redefined as 607 const int FPUStateSizeInWords = FloatRegisterImpl::number_of_registers * FloatRegisterImpl::save_slots_per_register; We then need to redefine the code at lines 108 - 110 to use the enum values: 108 rfp_off = r0_off + (RegisterImpl::number_of_registers - 2) * RegisterImpl::max_slots_per_register, 109 return_off = rfp_off + RegisterImpl::max_slots_per_register, // slot for return address 110 reg_save_size = return_off + RegisterImpl::max_slots_per_register}; Finally, we can edit the method save_live_registers at the point where it allows space for the extra vector register content. That needs to be updated to use the relevant constants: 116 if (save_vectors) { 117 // Save upper half of vector registers 118 int vect_words = FloatRegisterImpl::number_of_registers * FloatRegisterImpl::extra_save_slots_per_register; 119 additional_frame_words += vect_words; Could you prepare a new webrev with these extra changes in and check it is ok? Also, could you report what testing you did before and after your change (other than checking the dump output). You will probably need to repeat it to ensure these extra changes are ok. regards, Andrew Dinn ----------- Senior Principal Software Engineer Red Hat UK Ltd Registered in England and Wales under Company Registration No. 
03798903 Directors: Michael Cunningham, Michael ("Mike") O'Neill From matthias.baesken at sap.com Fri Nov 29 13:58:19 2019 From: matthias.baesken at sap.com (Baesken, Matthias) Date: Fri, 29 Nov 2019 13:58:19 +0000 Subject: 8234397: add OS uptime information to os::print_os_info output In-Reply-To: References: <30E06543-E081-429B-8293-8CA81D1F6870@sap.com> Message-ID: Hi, here is a new webrev, this time with nicer output (see os.cpp). http://cr.openjdk.java.net/~mbaesken/webrevs/8234397.3/ Best regards, Matthias > -----Original Message----- > From: Langer, Christoph > Sent: Donnerstag, 28. November 2019 11:55 > To: Baesken, Matthias ; Schmidt, Lutz > ; David Holmes ; > 'hotspot-dev at openjdk.java.net' > Subject: RE: 8234397: add OS uptime information to os::print_os_info output > > Hi Matthias, > > I'd like to see the uptime information in hs_err files. > > I, however, would rather like to see a more readable output like "OS uptime > 10 days 3:10". I understand that's some more formatting effort but on the > other hand you'd not need floating point calculations. > > As for os_windows.cpp: Why don't you provide an > os::win32::print_uptime_info(st) method to align with the other > implementations? 
> > Best regards > Christoph From kim.barrett at oracle.com Fri Nov 29 17:57:09 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 29 Nov 2019 12:57:09 -0500 Subject: RFR: 8213415: BitMap::word_index_round_up overflow problems In-Reply-To: References: <24C2BDC9-AC35-4804-95C8-1B59747C1494@oracle.com> <5bd19480-d570-b73d-70fe-10e93ef2ecb8@oracle.com> <3F4F27AC-4812-4E55-99DF-25F0A7BD991D@oracle.com> <2FB27F12-B779-419E-A501-715248CF2309@oracle.com> <4CD1F762-EAF0-497B-A307-2E9CABFCDD14@oracle.com> Message-ID: > On Nov 29, 2019, at 3:03 AM, Thomas Stüfe wrote: > > Hi Kim, > > On Thu, Nov 28, 2019 at 11:01 PM Kim Barrett wrote: > > On Nov 28, 2019, at 7:42 AM, Thomas Stüfe wrote: > > So, if I understand this correctly, we need this since we use the same type for bit and word indices and since we have an n:1 relationship between those two the max. bit index is necessarily smaller than _MAX? > > The point is to avoid overflow of the type used for bit indices when > aligning a value up to a multiple of the word size. This doesn't > really have anything to do with using the same types for bit indices > and word indices, though using different types might affect the > details of some of the calculations, and the range for the word type > would need to be suitably chosen to accommodate the bit range. > > > I'm still in the dark. In your current version max_size_in_words() and max_size_in_bits() there is an overflow, since both bit- and word indexes use the same type. With 64bit I come to: FFFFFFFF.FFFFFFC0 for max word index, 3FFFFFF.FFFFFFFF for max bit index. For 64bit types this does not matter much, but if we ever were to use smaller types, e.g. uint16_t, it would matter. Also, I find it surprising that max bit index is smaller than max word index. You have the values backward. The max bit index is certainly not the smaller, since it is a multiple of max word size. 
From thomas.stuefe at gmail.com Fri Nov 29 18:34:10 2019 From: thomas.stuefe at gmail.com (=?UTF-8?Q?Thomas_St=C3=BCfe?=) Date: Fri, 29 Nov 2019 19:34:10 +0100 Subject: RFR: 8213415: BitMap::word_index_round_up overflow problems In-Reply-To: References: <24C2BDC9-AC35-4804-95C8-1B59747C1494@oracle.com> <5bd19480-d570-b73d-70fe-10e93ef2ecb8@oracle.com> <3F4F27AC-4812-4E55-99DF-25F0A7BD991D@oracle.com> <2FB27F12-B779-419E-A501-715248CF2309@oracle.com> <4CD1F762-EAF0-497B-A307-2E9CABFCDD14@oracle.com> Message-ID: On Fri, Nov 29, 2019 at 6:57 PM Kim Barrett wrote: > > On Nov 29, 2019, at 3:03 AM, Thomas Stüfe > wrote: > > > > Hi Kim, > > > > On Thu, Nov 28, 2019 at 11:01 PM Kim Barrett > wrote: > > > On Nov 28, 2019, at 7:42 AM, Thomas Stüfe > wrote: > > > So, if I understand this correctly, we need this since we use the same > type for bit and word indices and since we have an n:1 relationship between > those two the max. bit index is necessarily smaller than _MAX? > > > > The point is to avoid overflow of the type used for bit indices when > > aligning a value up to a multiple of the word size. This doesn't > > really have anything to do with using the same types for bit indices > > and word indices, though using different types might affect the > > details of some of the calculations, and the range for the word type > > would need to be suitably chosen to accommodate the bit range. > > > > > > I'm still in the dark. In your current version max_size_in_words() and > max_size_in_bits() there is an overflow, since both bit- and word indexes > use the same type. With 64bit I come to: FFFFFFFF.FFFFFFC0 for max word > index, 3FFFFFF.FFFFFFFF for max bit > index. For 64bit types this does not > matter much, but if we ever were to use smaller types, e.g. uint16_t, it > would matter. Also, I find it surprising that max bit index is smaller than > max word index. > > You have the values backward. 
> > The max bit index is certainly not the smaller, since it is a multiple of > max word size. > > Okay, sorry. My fault. Thanks, Thomas From kim.barrett at oracle.com Fri Nov 29 20:37:34 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 29 Nov 2019 15:37:34 -0500 Subject: RFR: 8213415: BitMap::word_index_round_up overflow problems In-Reply-To: References: <24C2BDC9-AC35-4804-95C8-1B59747C1494@oracle.com> <5bd19480-d570-b73d-70fe-10e93ef2ecb8@oracle.com> <3F4F27AC-4812-4E55-99DF-25F0A7BD991D@oracle.com> <2FB27F12-B779-419E-A501-715248CF2309@oracle.com> <4CD1F762-EAF0-497B-A307-2E9CABFCDD14@oracle.com> Message-ID: > On Nov 29, 2019, at 3:03 AM, Thomas Stüfe wrote: > > Side note, I was interested in using smaller types because long term I would like to have a BitMap class in cases where I today use little hand written bitmaps. As it is now, BitMap has a pointer and a size, which makes it a 16byte structure on 64 bit, which is rather fat. The indirection is also often unwanted. I would like to have a BitMap class which contains directly the data as member(s), e.g. one where it just has a 16bit word or, maybe, an array of multiple words. That would make this structure a lot smaller and better suited to be included in space sensitive structures. There was some discussion here in Oracle about this sort of thing a while ago. Looking back over that discussion, I don't think we got very far with a statically sized bitmap, just some handwaving. I think the interest is there, but ideas (and time to pursue them!) are needed. > Oh sure. This was just the first draft. My idea was - if only for test reasons - to use a class type which wraps around a numeric and defines +/- operations and assignments. I wonder though whether there is a simpler way to make the compiler complain about assignments between word- and bit indices. Yes, some sort of lightweight wrapper class, or perhaps a C++11 enum class without any named enumerators, just the type safety. 
I have some followup tidying up I want to do after this change; maybe I'll add this to my list. > But even with the same underlying typedef, using different types for word- and bit indexes would make the code more readable and clearer. Perhaps, though I worry that it might give a false sense of security. From kim.barrett at oracle.com Fri Nov 29 21:45:15 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 29 Nov 2019 16:45:15 -0500 Subject: RFR: 8234779: Provide idiom for declaring classes noncopyable In-Reply-To: References: <233a779d-ae3b-4856-9c44-dd81bfceab6e@oracle.com> <129579A4-01C4-4F62-9582-06BE19CB13C0@oracle.com> Message-ID: > On Nov 28, 2019, at 3:50 AM, Per Liden wrote: > > On 11/27/19 10:49 PM, Kim Barrett wrote: >> That idiom is rather wordy and indirect though. In particular, it >> is generally accompanied by comments indicating that this is to make >> the class noncopyable, or that the declared functions are not defined >> (not always with a reason, so that needs to be inferred). Failure to >> provide such comments means the reader may need to check for a >> definition in order to determine whether that idiom is being used, or >> whether the definitions are just not inline. >> The proposed macro significantly reduces that wordiness. Far more >> importantly, it makes the intent entirely self-evident; there's no >> need for any explanatory comments. > > My objection is that you are effectively moving us _away_ from a well known C++ idiom, since people tend to read code before it goes through the pre-processor. Once we have C++11 support we can easily switch over to using "= delete", and anything that was previously ambiguous or needed a comment will become clear, and our code would stay idiomatic. Not really. So far as I can tell, the "well known" and widely used idiom is to derive from boost::noncopyable or something similar. That's pretty succinct and to the point. It's the approach I first saw and used in several other code bases. 
However, for reasons I mentioned earlier, I never thought it was a good fit for HotSpot. And having since learned about the first member problem, I would no longer recommend that approach at all. When the declared but undefined, or C++11 deleted mechanisms are discussed, often they're proposed as a macro, and there are usually "yuck! macros!" responses to such proposals, often suggesting other approaches like using a base class. Whether a macro is used is an interface and ease of use question. I strongly dislike boilerplate, and this is a fairly extreme example of such. I also like putting repetitive code behind names to make it easier to chunk and understand. From john.r.rose at oracle.com Fri Nov 29 23:04:59 2019 From: john.r.rose at oracle.com (John Rose) Date: Fri, 29 Nov 2019 15:04:59 -0800 Subject: RFR: 8234779: Provide idiom for declaring classes noncopyable In-Reply-To: <129579A4-01C4-4F62-9582-06BE19CB13C0@oracle.com> References: <233a779d-ae3b-4856-9c44-dd81bfceab6e@oracle.com> <129579A4-01C4-4F62-9582-06BE19CB13C0@oracle.com> Message-ID: On Nov 27, 2019, at 1:49 PM, Kim Barrett wrote: > > The proposed macro significantly reduces that wordiness. Far more > importantly, it makes the intent entirely self-evident; there's no > need for any explanatory comments. Or to put it another way, the explanatory comments can be centralized in the header file which defines the macro. And the macro can be given a name which explains the intent. The name and comments can reflect HotSpot-specific "house rules" (local design rules and conventions). On Nov 29, 2019, at 1:45 PM, Kim Barrett wrote: > ... I also like putting repetitive code behind names to make it > easier to chunk and understand. +1 A well-chosen macro name can be easier to read than a chunk of boilerplate. This is especially applicable to us since C++ boilerplate evolves over time, and as a highly portable system we don't have the ability to track one particular dialect of C++. 
But even if we did, we'd still have complex "house rules" to enforce and document, and macros play a role there. I don't think that learning the "house macros" for HotSpot is an excessive burden for people learning to work on HotSpot. Kim's proposal seems to be yet another one of these macros. Thanks, Kim and Per, for marshaling the arguments pro and con. - John From ioi.lam at oracle.com Sat Nov 30 01:13:37 2019 From: ioi.lam at oracle.com (Ioi Lam) Date: Fri, 29 Nov 2019 17:13:37 -0800 Subject: building libjvm with -Os for space optimization - was : RE: RFR: 8234525: enable link-time section-gc for linux s390x to remove unused code In-Reply-To: References: <3bffe1cf-4567-0cf6-4bfb-ad79bd0b9596@oracle.com> Message-ID: On 11/27/19 10:03 AM, Doerr, Martin wrote: > Hi Claes, > > that kind of surprises me. I'd expect files which rather benefit from -O3 to be far less than those which benefit from -Os. > Most performance critical code lives inside the code cache and is not dependent on C++ compiler optimizations. > I'd expect GC code, C2's register allocation and a few runtime files to be the most performance critical C++ code. > So the list of files for -Os may become long. Class loading/verification/resolution are also sensitive to C++ speed. Thanks - Ioi > Yeah, I think we should use native profiling information to find out what's really going on. > > Your idea to change file by file and check for performance regression makes sense to me, though. > > Best regards, > Martin > > >> -----Original Message----- >> From: Claes Redestad >> Sent: Mittwoch, 27. 
November 2019 18:57 >> To: Baesken, Matthias ; Doerr, Martin >> ; Erik Joelsson ; 'build- >> dev at openjdk.java.net' ; 'hotspot- >> dev at openjdk.java.net' >> Subject: Re: building libjvm with -Os for space optimization - was : RE: RFR: >> 8234525: enable link-time section-gc for linux s390x to remove unused code >> >> Hi, >> >> we discussed doing the opposite for Mac OS X recently, where builds are >> currently set to -Os by default. -O3 helped various networking >> (micro)benchmarks by up to 20%. >> >> Rather than doing -Os by default and then cherry-pick things over to -O3 >> on a case-by-case basis, I'd suggest the opposite: keep -O3 as the >> default, start evaluating -Os on a case-by-case basis. This allows for >> an incremental approach where we identify things that are definitely not >> performance critical, e.g., never shows up in profiles, and switch those >> compilation units over to -Os. Check for harmful performance impact and >> expected footprint improvement; rinse; repeat. >> >> $.02 >> >> /Claes >> >> >> On 2019-11-27 17:36, Baesken, Matthias wrote: >>> Hello Martin, I checked building libjvm.so with -Os (instead of -O3) . >>> >>> I used gcc-7 on linux x86_64 . >>> The size of libjvm.so dropped from 24M (normal night make with -O3) >> to 18M ( test make with -Os) . >>> (adding the link-time gc might reduce the size by another ~ 10 % , but >> those 2 builds were without the ltgc ) >>> Cannot say much so far about performance impact . >>> >>> Best regards, Matthias >>> >>> >>> >>>> Hi Matthias and Erik, >>>> >>>> I also think this is an interesting option. >>>> >>>> I like the idea to generate smaller libraries. In addition to that, I could also >>>> imagine building with -Os (size optimized) by default and only select -O3 >> for >>>> performance critical files (e.g. C2's register allocation, some gc code, ...). 
>>>> >>>> If we want to go into such a direction for all linux platforms and want to >> use >>>> this s390 only change as some kind of pipe cleaner, I think this change is >> fine >>>> and can get pushed. >>>> Otherwise, I think building s390 differently and not intending to do the >> same >>>> for other linux platforms would be not so good. >>>> >>>> We should only make sure the exported symbols are set up properly to >> avoid >>>> that this optimization throws out too much. >>>> >>>> My 50 Cents. >>>> >>>> Best regards, >>>> Martin >>>>